Force mount hibernated NTFS volume

This is a problem I face often. Because of how older versions behaved, the answers you will find online no longer apply. Online, you will find that

ntfsfix /dev/sdc2

should do the trick. In reality it will not; ntfsfix happily reports success, as you will see below, yet the volume still refuses to mount.

Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sdc1 was processed successfully.

The real solution is asking ntfs-3g’s mount to remove the hiberfile

WHAT YOU NEED – YOU WILL LOSE THE HIBERFILE

mount -t ntfs-3g -o remove_hiberfile /dev/sdc2 /hds/intelssd

Without the remove_hiberfile instruction, you will probably get an error message such as

Windows is hibernated, refused to mount.
Failed to mount '/dev/sdc2': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.

Alternatively, if you do not need to write to the volume, you can mount it read-only with

mount -o ro /dev/sdc2 /hds/intelssd
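If you want that read-only mount to survive reboots, a minimal /etc/fstab sketch could look like the following (the device and mount point are taken from the examples above; adjust them to your system):

/dev/sdc2   /hds/intelssd   ntfs-3g   ro   0   0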

Mounting a multipart vmdk disk on Linux

There are many ways to do that. One is using the tools provided by VMware to combine the split disks into a single image, and then mapping its partitions with

kpartx -av mydisk.vmdk

Then

mount /dev/mapper/loop0p1 /hds/disk
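When you are done with the image, the reverse operations look roughly like this (a sketch, assuming the mount point used above):

umount /hds/disk
kpartx -dv mydisk.vmdk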

Another, simpler method uses qemu-utils:

apt-get install qemu-utils
qemu-img convert disk-s001.vmdk s01.raw
....
qemu-img convert disk-s013.vmdk s13.raw
....
qemu-img convert disk-s032.vmdk s32.raw

The output files will be sparse, so their disk usage will not be as large as their apparent size; a “df -h” should not show any loss of disk space beyond the data actually used by files inside the image.

Following the above, we need to combine the raw files like so

cat s01.raw s02.raw s03.raw s04.raw s05.raw s06.raw s07.raw s08.raw s09.raw s10.raw s11.raw s12.raw s13.raw s14.raw s15.raw s16.raw s17.raw s18.raw s19.raw s20.raw s21.raw s22.raw s23.raw s24.raw s25.raw s26.raw s27.raw s28.raw s29.raw s30.raw s31.raw s32.raw > combined.raw
losetup /dev/loop0 combined.raw
kpartx -a /dev/loop0
mount /dev/mapper/loop0p1 /hds/img1
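If typing out all thirty-two conversions by hand feels tedious, a small shell loop does the same job; this is only a sketch and assumes the extents are named disk-s001.vmdk through disk-s032.vmdk as above:

for f in disk-s0*.vmdk; do
    n=${f#disk-s0}                        # e.g. 01.vmdk
    qemu-img convert "$f" "s${n%.vmdk}.raw"
done
cat s*.raw > combined.raw                 # zero-padded names keep the pieces in order

Then carry on with the losetup, kpartx and mount commands above.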

Windows 10 slow shutdown on SSD (Solved)

SSDs are the best thing that happened to computer boot time (and many other things) since the invention of the abacus

But for some reason, booting up is much faster than shutting down; shutdowns (and reboots) are taking a long time.

So let me see what I can do about this.

1- Windows clearing the page file at shutdown (ClearPageFileAtShutdown) happens before the machine powers off, and it is my first guess as to why this is happening.
So let us set the following registry value to zero (0) and see if this speeds up shutdown time.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, value ClearPageFileAtShutdown, set to (0)
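If you prefer the command line to regedit, a one-liner from an elevated command prompt should do the same thing (a sketch; double-check the value name before running it):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 0 /f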

This session should still shut down slowly; from the next boot onwards, shutdown will be much faster.

The other thing I think is relevant is moving the indexing service's index files to my spinning disk; the spinning disk holds thousands of files anyway, and I would like to keep my SSD fast for certain other applications.

Aligning your Samsung 840 EVO – Slow disk problem

This probably applies to both the 840 EVO and the 850 EVO, but not to the 840 PRO and 850 PRO, because the PRO models are not TLC.

All over the internet, people are saying that solid state drives don’t need to be aligned because they will scramble the used flash cells anyway for wear leveling.

This is absolutely NOT TRUE. Although wear leveling does work that way (to put it in simple terms), the mapping algorithm that levels the writes maps blocks to other blocks.

So here is how it works. Assume for a moment that there was no wear leveling: when the partition is not aligned to a starting offset that is a multiple of the erase block size, writes and erases that should touch one block can end up erasing and writing two blocks. The erase block is a hardware restriction, so even when the wear leveling algorithm selects a new location, the problem of erasing two blocks instead of one remains.

Don’t take my word for it: misalign one of your partitions, then benchmark 512-byte and 4K reads and writes; both will be much slower.

Now, what you need to do is align the file system to the erase block size.

Because this disk has a 1.5 MiB (1536 KiB) erase block, and to be safe we also want it to align with 2048 KiB (just in case the erase block is not the whole story), you can use an alignment value of 12288 sectors (6144 KiB), which is a multiple of both 1536 KiB and 2048 KiB.

So, in Linux, even though partitioning software usually aligns correctly these days (and in Windows it is already done for you, or can be done by Samsung's Magician software), you can check the current alignment with

fdisk -l /dev/sdb

For your own math, the EBS (erase block size) on those drives is 1.5 MiB.

So basically, 12288 is 3 × 4096; the three comes from the fact that it is a triple-level cell (TLC).
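For reference, here is a minimal sketch of creating a single aligned partition with parted; /dev/sdb, GPT and ext4 are assumptions, and the 6144 KiB start is the offset discussed above:

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext4 6144KiB 100%
# verify with: fdisk -l /dev/sdb  (the partition should start at sector 12288)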

Over provisioning SSD in linux

Over provisioning a Samsung 1TB 850 EVO

Mind you, don't follow this tutorial step by step unless you have a 1TB Samsung 850 EVO; if you have a smaller disk, you need to adapt the numbers to your SSD 😉

Over provisioning a flash disk is simply some un-partitioned space at the end of the disk, but you need to tell the SSD's controller about that free space so it can use it for its housekeeping. You also need to find out whether Tejun Heo's on-demand HPA unlocking patch applies to your distro; if it does, you need to deal with kernel patching first.

First of all, the controller will usually use its cache RAM to do the over provisioning, or at least this is what I understood from some text on the Samsung website; you can make things faster by allowing it to use flash space while it erases a 1.5 MB flash area to put the data in.

1- How big should the over provisioning area be ?

Samsung recommends 10% of the disk's space. Somewhere hidden in a PDF on their website, they explain that OP space can be anywhere between 7% and 50%! We will use 10% as our write patterns are not that harsh; but mind you, a database that alters a few rows every second can probably make the most use of such OP space.

2- Won’t that 10% wear out before the rest ?

No. There is a mapping function inside the controller, so that space is in fact wherever the controller thinks is appropriate. The wear leveling algorithm kicks in at a stage after the logical stage of partitions and so on; it is blind to the file system and to the over provisioning area. It simply remaps any address you give it to some other address that is not already mapped; at flash erase time those mappings are released, and other areas of the disk get assigned in their place. I have no idea whether it uses a random algorithm or simply keeps a record of flash chip usage (at this sample size, that won't make any difference).

3- Are you sure we are informing the controller and not just telling Linux what the last address is ?

Sure I’m sure, ask the controller DIRECTLY yourself with the command

smartctl -i /dev/sdb

Before the operation described in this article, it will report a user capacity of 1,000,204,886,016 bytes; after it, it will say

User Capacity:    900,184,411,136 bytes [900 GB]

Meaning that the disk itself, via its S.M.A.R.T. identity data, now tells us that this much is available to the user after the over provisioning operation.

So, how do we over provision in Linux?

See the last sector of your SSD:

hdparm -N /dev/sdb

In my case, my Samsung 850 EVO reports the following; notice that both numbers are the same (x out of x), and HPA is disabled.

max sectors = 1953525168/1953525168, HPA is disabled

Now, 1953525168 * 512 = 1,000,204,886,016 (1 TB !)

Now, we want to set a maximum address; anything after this address becomes a PROTECTED AREA that the controller knows about. I will multiply the number above by 0.9 to get the new maximum sector count, taking the integer part alone.
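If you want the shell to do the arithmetic, something like this sketch works; the total comes from the hdparm -N output above, and you can round the result however you like, which is why the figure used below differs by a handful of sectors:

TOTAL=1953525168                  # total sectors, from hdparm -N /dev/sdb
echo $(( TOTAL * 9 / 10 ))        # about 90% stay visible; prints 1758172651 here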

Without the --yes-i-know-what-i-am-doing flag, hdparm -Np1758172678 /dev/sdb will stop and ask you to confirm that you know what you are doing, so run:

hdparm -Np1758172678 --yes-i-know-what-i-am-doing /dev/sdb

 setting max visible sectors to 1758172678 (permanent)
 max sectors   = 1758172678/1953525168, HPA is enabled

Now again, hdparm -N /dev/sdb

max sectors = 1758172678/1953525168, HPA is enabled

Now, to make sure we are not suffering from that dreaded bug, let's reboot the system and check again afterwards. I am using Debian Jessie, so it is unlikely that I am affected.

Yup, hdparm -N /dev/sdb still reports a maximum address smaller than the physical one, so the setting survived the reboot.

Now, we seem to be ready to talk fdisk business.

fdisk /dev/sdb

Now, if you press o (create a new empty partition table), then p (print), you should get a line such as

Disk /dev/sdb: 838.4 GiB, 900184411136 bytes, 1758172678 sectors

This means that fdisk understands the new size, and asking it to create a partition (the n command) will yield this

/dev/sdb1 2048 1758172677 1758170630 838.4G 83 Linux

Aren't we happy people.

Now, let's tune the file system, mount with TRIM support, and enjoy all the beautiful abilities an SSD will bless us with.

tune2fs -o journal_data_writeback /dev/sdb1
tune2fs -O ^has_journal /dev/sdb1
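The tune2fs lines above only tune the ext4 file system assumed to live on /dev/sdb1; for the TRIM part, a minimal /etc/fstab sketch could look like this (the mount point is just an example):

/dev/sdb1   /hds/ssd   ext4   noatime,discard   0   2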

NOTE: in the event that you are presented with an error such as the following

/dev/sde:
 setting max visible sectors to 850182933 (permanent)
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 40 01 21 04 00 00 a0 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 max sectors   = 1000215216/1000215216, HPA is disabled

The most likely cause is the SATA controller (try executing the hdparm -Np command through a different SATA controller); another possible cause is that some disks require being trimmed before this operation.

Alliance ProMotion 6410

One little problem with modern VGA cards is HEAT; they consume over 30 W at idle, and those 30 watts go straight into the case. So I dug through my old computers, found one dating back to 1995-1996, pulled out its VGA card, and installed it in a modern i3 machine for testing, pending installation in an i7 with 64 GB of RAM and what have you.

On eBay, you can find such PCI cards for around $10: Cirrus Logic, SiS, ATI, or S3 should all work; if the ProMotion card works, those should work too.

Now, I ran the Debian Jessie installer and the installation went fine. When rebooting, the system starts on the PCI card but then switches to the embedded graphics (which comes with the i3 CPU); the BIOS does not allow me to disable that, so rather than looking for a solution, I will test the adapter on an i7 (which has no built-in VGA).

I have a good feeling that it will work right away. Here is some information about my 20-year-old graphics card (I will post some photos too when I pull it out):

    Made by: Alliance
    Codename: ProMotion 6410
    Bus: PCI
    Memory Size: 1MB
    Max Memory Size: 4MB
    Memory Type: FPM
    Year: 1995
    Card Type: VGA
    Made in: USA
    Owned by: Palcal
    Outputs: 15 pin D-sub
    Power consumption (W): 1.5
    Video Acceleration: MPEG-1 (VCD)
    Core: 64bit
    Memory Bandwidth (MB/s): 213
    Sold by: miro
    Press info: Freelibrary

You can find

Recovering deleted files from ext4 partition

Update: although extundelete restored most of my files, some files could only be recovered, without their file names, through photorec, an application installed alongside testdisk.

So, what happened was that I added a directory to Eclipse, a message appeared, I hit enter accidentally, and all the files in the web directory were lost; no backup, years of programming…

Instantly, I shut down the computer so that I would not overwrite the freed disk space with new files, logs, and the like. I got a larger disk (1.5 TB) and did the dd first (I recommend gddrescue in place of dd, just in case your disk has bad sectors).

I installed Linux (Debian 7) on the new disk, moved the hard drive from the other computer into the new PC, then installed the software I always use to recover files: testdisk. TestDisk did not work as expected on either disk when it came to the ext4 partition; the process that ended in an error went as follows:

testdisk
create log file
Choose the 1TB disk (the one with the deleted files)
Partition type (INTEL)
Advanced
Choose the main partition (ext4) and choose List (left and right arrow keys)
Damn, the error.

TestDisk 6.13, Data Recovery Utility, November 2011
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
 1 * Linux                    0  32 33 119515  60 33 1920010240
Can't open filesystem. Filesystem seems damaged.

So, I quit TestDisk and installed
apt-get install extundelete

extundelete /dev/sdb1 --restore-directory /var/www

This way, I only restore the files from that directory; if you want all the deleted files, you could use something like

extundelete /dev/sda4 --restore-all
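extundelete can also pick out individual files if you know their paths; this is just a sketch with a hypothetical file name, with the path given relative to the root of the partition being scanned:

extundelete /dev/sdb1 --restore-file var/www/index.php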

Anyway, my files are back; a few are missing, but I am sure I can deal with that.

Intel processor Lithography explained

In short, it describes how small the processor's transistors (its logic gates), and the spacing between them, are.

It makes all the difference in speed, and a considerable difference in power consumption.

For example, I ran a certain task on both of the following processors:

E3300, a low-cost Celeron processor with a lithography of 45 nm (1M cache, 2.50 GHz, 800 MHz FSB)
Q6600, a much more expensive processor (at the time both were purchased) with a lithography of 65 nm (8M cache, 2.40 GHz, 1066 MHz FSB)

When comparing a single core's throughput, the cheap Celeron beat the quad core by a very considerable margin, much larger than the difference in clock speed. The actual numbers would require explaining many factors: the nature of the millions of records that needed processing, how they were processed, how jobs were distributed between computers, how the random sample was guaranteed to be random, and so on; I don't think that is very relevant here.

So, lithography is something you should really consider when buying a processor; the lower the better. My laptop's i7 is built with a lithography of 22 nm, which is the best number as of 2013.

Disk spindown in Linux, specifying spindown idle time

Update/2025: The old info below (starting with the title “spindown”) no longer works the way it used to back then; the expected behavior has changed. Today, if you try this on your drives, it may or may not work, due to a bug-fix for a bug that is 15 years old!

So there are two conditions:

  • the device is not attached via USB or Firewire
  • supports APM

For the impatient, a workaround is to use udev rules. Start by running “grep . /sys/class/block/sdb/device/power/*” to find out what is currently set for autosuspend control; if it returns “control:on” or an empty “autosuspend_delay_ms”, proceed. So let us get down to business.

So, to test this out without making it permanent, the following lines should spin your disk down after five seconds of idle (5000 ms here, a deliberately short value for testing):

echo 5000 | sudo tee /sys/class/block/sdc/device/power/autosuspend_delay_ms
echo auto | sudo tee /sys/class/block/sdc/device/power/control

If the above worked for you, you can simply add the following rules file to make those settings permanent

Create the file “/etc/udev/rules.d/99-spindown-disks.rules” and put the following contents in it

ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", TEST=="device/power/autosuspend_delay_ms", ATTR{device/power/autosuspend_delay_ms}="15000"
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", TEST=="device/power/control", ATTR{device/power/control}="auto"

A different workaround is using the “hd-idle” package (https://github.com/adelolmo/hd-idle) 😉

Spinning the disks down manually (hdparm -Y /dev/sdc) works instantly, no problems there !

Setting this directly with “hdparm -S 240 /dev/sdb”, for example, should work (I still need to check), but not through hdparm.conf!

So how do I know this criterion is what is stopping me from using the config file to spin things down? I tried this command:

hdparm -B /dev/sda

/dev/sda:
APM_level = not supported

Disk Spin down (Tested with Bullseye 2022)

Even though everything concerning block devices in Linux has shifted to unique identifiers, hdparm on the command line still uses the old /dev/sdX naming (hdparm.conf, as we will see below, can use the disk ID).

To control disk spindown, and to manually issue commands, you will need to have the package installed

apt-get install hdparm

There is a problem with disk spindown via hdparm on the command line: you must address a disk as /dev/sdc, a name which can change when USB media and other disks are attached, or when you add more drives.

hdparm -Y /dev/sdb will spin a disk down instantly
hdparm -S 240 /dev/sdb will make this disk spin down after 20 minutes of idle time (values up to 240 are in units of 5 seconds, so 240 × 5 s = 20 minutes)

or add a section such as the following at the bottom of /etc/hdparm.conf

/dev/sdc {
spindown_time = 240
}

to make those changes persistent across reboots.

The new way of doing this is using the disk ID, to find the disk ID, run the command

ls -l /dev/disk/by-id

Once you know your disk ID, the block should look like this:

# My 3TB WD green 
/dev/disk/by-id/ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0299541 {
spindown_time = 240
}

To check the status of a disk, here is what you do

hdparm -C /dev/sde

You could get one of the following results
When spun down…
drive state is: standby
When active
drive state is: active/idle

Don’t make your disks spin down too often; 20 minutes is good for me in almost all circumstances.

If the disks don’t spin down, chances are that selftest is enabled…

Check if it is enabled with

smartctl -a /dev/sdb
if it reads
Auto Offline Data Collection: Enabled.
then you need to disable it with
smartctl --offlineauto=off /dev/sdb

Then wait for them to finish (if a test is running), and after that they will spin down.

Can I mount a disk image created with dd, ddrescue, or dd_rescue on Windows?

The lowdown: Yes, you can; try the free OSFMount.

How did I find out about it? A friend sent me his laptop to un-delete files for him. I didn't have time to figure out how to un-delete under Windows, so (with his permission) I mounted his laptop's hard drive on my Linux computer, dd'd the whole drive to a 250GB image file, put the hard drive back where it was (in the laptop), and sent it back to him so he could keep using it. Once I found the time, I simply copied the image to a Windows computer, mounted it with OSFMount, then un-deleted everything with Recuva (the best un-delete software in my opinion), put his files on an external hard drive, and sent it his way.

Images created with dd, ddrescue, or dd_rescue are not in any special format; they are a direct copy of a whole disk, including boot records, partition tables, and file systems, so mounting such images should not be hard at all. And indeed, it turns out there is a program that can mount them under Windows (I would not be surprised if there are hundreds that can), but for now this one seems to be a champ, and it seems to be free.
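For reference, creating such an image on the Linux side looks roughly like this (device and file names are examples; the map file lets ddrescue resume and keep track of bad sectors):

ddrescue /dev/sdb laptop.img laptop.map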

Yet this program seems to be more than a mounting tool for raw disk images: it also mounts CD images (I guess the one I currently use, Virtual CloneDrive, is now obsolete), creates RAM disks, and can open a bunch of other image formats (NRG, SDI, AFF, AFM, AFD, VMDK, E01, S01).

So there you are, all you need for your disk mounting needs in 1 program 😀

Cheers