Linux find and replace string in multiple files

On Windows, you might have used text editors that can search, or search and replace, across all the files in a folder. One such tool I used on Windows is "Source Edit" by Joacim Andersson (Brixoft Software). That editor no longer seems to be maintained, as the developer appears to have moved into making games, but there are certainly many other editors that let you do the same thing.

On Linux, I don't need a special editor for that: the basic tools that ship with the operating system already do it. Multi-gigabyte files can be searched, and have text in them replaced, at the speed it takes to read them, without ever opening them for editing.

So, let us assume we have a folder with many text files (including css, js, html, or php files, for example). To search that folder recursively, we can use grep:

grep -Ril "text-to-find-here" /path/to/file/

-R (-r) search recursively
-l show only the names of matching files, not the matched contents
-i match case-insensitively

Another tool better suited to searching code is ack (ack-grep), which I will come back to later in this article; there are also newer tools in the same space that I have not used yet.

Now, replacing a string inside a single file is simple with a cool tool called sed:

sed -i '/TEXTTOFIND/ s//TEXTTOREPLACEWITH/g' verylargefile.txt

Now, to find and replace all occurrences of a string in all the files in a directory and its subdirectories, you can use the following command. Mind you, the search string is always treated as a regular expression: if it contains a dot or anything else that is part of regex syntax, you will need to escape it. To avoid that entirely, check out the sd command below.

find -type f -exec sed -i 's/find/replace/g' {} +

The trailing {} + makes find invoke the command given to -exec with many file names at once, instead of once per file, which is much faster.
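If you want to touch only the files that actually contain the string, you can feed grep's file list into sed. A minimal sketch (the string and path are placeholders); -Z and -0 keep file names containing spaces safe:

grep -RlZ "text-to-find-here" /path/to/files/ | xargs -0 sed -i 's/text-to-find-here/replacement/g'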

The sd command

sd and fd are written in Rust (both are available as Rust crates); on Debian you can install them with apt install fd-find sd (note that the fd-find package installs the binary as fdfind). The -s flag disables regular expressions.

fd --type file --exec sd 'Find' 'Replace'
Or with backup
fd --type file --exec cp {} {}.bk \; --exec sd 'from "react"' 'from "preact"'

-s : No regular expressions

In some cases you can even forget about fd, as sd is capable of dealing with multiple files directly:
sd -i "\n" "," *.txt
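The -s flag is the reason I reach for sd when the string is full of regex metacharacters. A hypothetical example (file names made up): bumping a versioned script reference without escaping every dot:

sd -s 'jquery-3.3.1.min.js' 'jquery-3.6.0.min.js' index.html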

Gnome terminal tab title

To tell tabs apart quickly, you can give every terminal tab its own name; just execute the following line inside that terminal window:

echo -ne "\033]0;SOME TITLE HERE\007"

I am doing this on the default GNOME Terminal in Debian 11/12 (Bullseye/Bookworm); older methods no longer work.

You will have to execute this every time in every SSH window to a remote machine; I am still looking into a way to make the name permanent!
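One approach that should work until I write that up properly (a sketch, assuming bash and a terminal that honors the same escape sequence) is to set the title from PROMPT_COMMAND in ~/.bashrc on the remote machine, so it is refreshed at every prompt:

# in ~/.bashrc on the remote host: set the tab title to user@host before each prompt
PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}\007"'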

Does this work with PuTTY on Windows?

YES! I have just tested this with the latest PuTTY (2024-11-02), two and a half years after posting this for the GNOME terminal, and I can confidently say it does.

New firmware for my Western Digital “My Book Live” NAS storage device

The WD My Book Live is a NAS device based on Debian Linux. Since Debian stopped supporting its processor (the APM82181), the device has received no updates and probably never will, so the next best thing to do, in my opinion, is to install openWRT.

WARNING: I recently got a second My Book Live device and tried installing 23.05.0, but for some reason I could not get networking to work, so I simply installed v21, then upgraded to 23. There was probably just something I was missing, but I could not be bothered figuring it out; this was the faster way.

Before you start

1- Only the first few paragraphs of this tutorial (STEPS 1 THROUGH 6) are the instructions you need; the rest is extra reference, and in short you don't need to read it to get your device running, but I do recommend YOU SKIM THE WHOLE THING BEFORE YOU START.
2- This procedure requires you to take the disk out of the enclosure and attach it to a PC to switch the firmware, then put it back.
3- The upgrade will delete all your data. You will need to move any data that is already on your WD NAS drive somewhere else before you start.

Step 1: Move any existing data BEFORE TAKING APART.

Move any data you may have on the drive to a temporary location outside the NAS drive. This has to be done before taking the drive apart, because the unconventional 64 kB block size of the disk will be nothing but trouble if you try to extract the data with the disk mounted on a Linux PC, for example.

Step 2: Take the disk apart

I have included photos to help you do that, it is not rocket science.

Step 3: Mount the disk on a Linux PC (Windows and Mac should work)

Connect the disk to a Linux PC and mount it (Windows might work with software such as Etcher, but I make no guarantees).

Step 4: Download the openWRT firmware

Go to the drive's page on the openWRT website (https://openwrt.org/toh/western_digital/mybooklive) and download the firmware image to your Linux (or Windows) PC.

Step 5: Write the firmware to the disk.

Decompress the file, then copy it to the drive with a command similar to the one below, but make 100% sure to replace sdx with your own drive designation:

 dd if=/root/wdsata.img of=/dev/sdx bs=64k

This writes the firmware to the disk, overwriting it, and effectively losing any data you did not back up in step 1.
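Before you run the dd above, it is worth double-checking which device node the NAS disk actually received; picking the wrong one will destroy that disk instead. For example:

# list block devices with their sizes and models to identify the right drive
lsblk -o NAME,SIZE,MODEL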

Step 6: Put the drive back in the enclosure

Nothing to say here, this is the reverse of step 2

Once it is back in the enclosure, note that you cannot just connect it to your existing router's LAN: openWRT configures its port as 192.168.1.1 and runs a DHCP server on it, which will conflict with your router.

Step 7: Create the data partition

At this stage your device will boot, but you still need to create (or expand) the data partition: the partition that should not be overwritten when you upgrade the firmware later, for example.

opkg update
opkg install gdisk blkid openssh-sftp-server block-mount
gdisk /dev/sda

As soon as gdisk opens, you may be presented with the following message; if so:

Found valid MBR and corrupt GPT. Which do you want to use? (Using the
GPT MAY permit recovery of GPT data.)
 1 - MBR
 2 - GPT
 3 - Create blank GPT

Choose 1 to keep the two partitions we have. Now hit the command (w) to write and confirm, then quit; gdisk has just switched your disk from MBR to GPT. Now run gdisk again the same way (gdisk /dev/sda).

Hit n for a new partition, accept the default (3) for the partition number, and use 2097152 as the first sector, which aligns with 4K-sector advanced format drives and lands nearest to the 1 GB mark.

mkfs.ext4 /dev/sda3
mkdir /share
blkid /dev/sda3
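blkid prints the new partition's UUID, which you will need in the next step; the output will look something like this (shown with the UUID from my own setup below, yours will differ):

/dev/sda3: UUID="9643bd00-f117-4074-a252-7ea30a5174e2" TYPE="ext4"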

You might find a file named fstab in /etc; this is not the file that needs to be edited. The one you are looking for is /etc/config/fstab. In my case the UUID was UUID="9643bd00-f117-4074-a252-7ea30a5174e2" (yours will certainly be different), so in my fstab I added the following lines near the end:

config mount
	option target '/share'
	option uuid '9643bd00-f117-4074-a252-7ea30a5174e2'
	option enabled '1'
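If you would rather not write this section by hand, the block-mount package installed above can generate it for you (standard openWRT tooling; the generated entries are disabled by default, so review the file and set option enabled '1' before mounting):

block detect > /etc/config/fstab
block mount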

Now, network sharing is what I was originally interested in when I got this unit, and it is why I am replacing its firmware, so let us install Samba:

opkg update && opkg install samba4-server luci-app-samba4

Now, add the following line to /etc/passwd to add me as a user to the system (replace yazeed with your own user name):

yazeed:*:1000:65534:yazeed:/var:/bin/false

Or, if you do not want to add the user manually, you can install the useradd tool and add users through it like so:

opkg install shadow-useradd
useradd yazeed
Unfortunately, useradd alone won't cut it here, and you will still have to adjust the resulting entry in /etc/passwd as shown above.

Now, with either method of the above, run the commands:

passwd yazeed
smbpasswd -a yazeed
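To actually export a share, define it in /etc/config/samba4 (or through the LuCI page that luci-app-samba4 adds). A minimal sketch, assuming the /share mount point and the user from above; the option names follow the samba4 UCI schema:

config sambashare
	option name 'share'
	option path '/share'
	option users 'yazeed'
	option read_only 'no'

Then restart the service with /etc/init.d/samba4 restart for the change to take effect.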

Now, since this is a NAS device, disk power-management tools may be a good idea:

opkg install hd-idle luci-app-hd-idle hdparm

To check whether the disk is spinning, try the command:
hdparm -C /dev/sda
The response "active/idle" means it is spinning.
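hdparm can also spin the disk down on demand, or after a fixed timeout, independently of hd-idle. A quick sketch (the value 120 is my assumption; in hdparm's encoding it means a 10-minute timeout):

hdparm -y /dev/sda
hdparm -S 120 /dev/sda

The first line forces the drive into standby immediately; the second makes it spin down on its own after 10 minutes of inactivity.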

You are done.

FAQ

Is the hardware and the new openWRT firmware compatible with my 8TB hard drive?

Yes, it is. I have found many people asking whether the hardware supports drives over 2TB; the answer is yes, but you will have to use GPT rather than MBR (see the steps above).

About the original firmware

What is that vulnerability about?

It comes from WD's cloud service. The bottom line is that many devices were completely wiped remotely by malicious users, and it is unknown whether the data itself leaked, so yes, it is very serious.

What is the difference between a quick factory restore and a full factory restore?

Quick factory restore is probably what you are looking for; the latter seems to do a zero-fill on the hard drive after performing a factory restore, to prevent data retrieval (for example, before you sell it). You can verify this by logging in using SSH, and by the fact that the tooltips state something to that effect.

Inspecting the device

To begin with, I logged in via SSH and inspected some things. To enable SSH access on the My Book Live original firmware, you need to visit a URL such as http://mybooklive/UI/ssh or http://192.168.2.116/UI/ssh (replace the IP with your own).

The system is based on the following CPU:

CPU
processor       : 0
cpu             : APM82181
clock           : 800.000008MHz
revision        : 28.130 (pvr 12c4 1c82)
bogomips        : 1600.00
timebase        : 800000008
platform        : PowerPC 44x Platform
model           : amcc,apollo3g
Memory          : 256 MB

With that out of the way, a look at /etc/apt/sources.list confirmed that it is a Debian distro. The only problem is that Debian stopped supporting this CPU some time ago, so you can't go past Debian 8 (Jessie):

deb http://ftp.us.debian.org/debian/ squeeze main
deb http://ftp.us.debian.org/debian/ wheezy main
#deb-src http://ftp.us.debian.org/debian/ wheezy main
#deb http://ftp.us.debian.org/debian/ sid main

Checking the disk info with hdparm revealed that the disk is a WDC WD20EARX-00PASB0, which is, as I expected, a Caviar Green (SMR disk).

parted (the new fdisk, so to speak) shows the following partition scheme for the existing system:

Model: ATA WDC WD20EARX-00P (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system     Name     Flags
 3      15.7MB  528MB   513MB   linux-swap(v1)  primary
 1      528MB   2576MB  2048MB  ext3            primary  raid
 2      2576MB  4624MB  2048MB  ext3            primary  raid
 4      4624MB  2000GB  1996GB  ext4            primary

And a “df -h” reveals

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              1.9G  555M  1.3G  31% /
tmpfs                 5.0M     0  5.0M   0% /lib/init/rw
udev                   10M  6.7M  3.4M  67% /dev
tmpfs                 5.0M     0  5.0M   0% /dev/shm
tmpfs                 100M  4.6M   96M   5% /tmp
ramlog-tmpfs           20M  4.5M   16M  23% /var/log
/dev/sda4             1.9T  2.1G  1.9T   1% /DataVolume

A good alternative for this Gigabit LAN network-attached storage is openWRT, the same firmware I use for my routers!

There are things you need to know in advance, though, first of which is that changing the firmware will require you to delete everything on the drive, as Western Digital used a bunch of unconventional choices, such as a 64 kB filesystem block size!

With that out of the way, you can skip to the step-by-step openWRT installation (steps 1 through 6 above, including backing up your data), then come back for the why and the extra details.

What if I want to revert to the WD software?

That is indeed a good question, and to make that easy, I have backed up the entire disk to another one until I am sure that I don't want to go back. It is also worth mentioning that the latest firmware on the WD website dates back to 2015, which at the time of writing is 6 years ago!

Where can I find the up-to-date openWRT distribution for this drive?

openWRT has a page dedicated to this drive, both the single and the Duo, here: https://openwrt.org/toh/western_digital/mybooklive

What are the benefits of the NAS box (enclosure)? Why not just take the hard drive out and put it in a PC somewhere?

The Western Digital My Book Live has a super-low-power CPU, and when the disk is spun down it consumes very little energy (not a significant load on your UPS, for example). It is also fanless, so apart from the drive itself when it is spinning, it is silent, which is also a nice thing. So I would argue that keeping it, and updating its software, is a good idea.

Another reason is the amount of relevant software provided through openWRT packages, covering many more things than the original firmware (miniDLNA included).

Errors and resolution

1- I have this error that I have not resolved yet:

mv: setting attribute 'user.DOSATTRIB' for 'user.DOSATTRIB': Permission denied

2- The NAS box will not accept many files that Windows creates, such as Thumbs.db. To allow such files to be stored, edit the Samba template and comment out the "veto files" line, then make sure the config is regenerated from the template.

How do I keep the system up to date?

If you come from a Debian background, you would normally apt-get update then apt-get upgrade, and that is that. In openWRT there is no such blanket upgrade command: opkg upgrade upgrades one package specified by name, so the solution is the following line:

 opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade

Mounting a disk image in Linux

Mounting a disk image created with dd, dd_rescue, or ddrescue has become much simpler than it used to be; all you need now is to issue the command:

losetup --partscan --find --show disk.img

The above will tell you which loop device is being used; let us assume it is /dev/loop0. Right after, a quick fdisk -l should show you the partitions; in my case, I have /dev/loop0p1 and /dev/loop0p2.

mount /dev/loop0p1 /mnt

Now the first partition is mounted. To reverse this, first unmount /mnt, then delete the loop device with:

losetup -d /dev/loop0

The loop partitions mount like any other device, in read-write mode by default; if you want to mount one read-only, use the -o switch:

sudo mount -o ro /dev/loop0px /hds/loopdevice

Now, if you have dd'd a partition rather than a whole block device (disk) and want to mount it, you can simply attach it as a loop device and then mount the loop device (loop0) itself, without a pX at the end.
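A minimal sketch of that partition-image case (the file name is a placeholder); the loop mount option performs the losetup step for you:

mount -o loop,ro partition.img /mnt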

Connecting to Windows KVM with VNC and a PuTTY tunnel

The setup assumed in this post is as follows: you are working on a remote Windows computer, there is a Linux KVM host somewhere running guest virtual machines, and you would like to connect to a guest machine's console (the guest may be running Windows, Linux, macOS, or any other operating system).

By default, KVM only allows VNC connections to a virtual machine's console from the host computer itself, so here are the tips for creating a tunnel to the host and connecting to your KVM virtual machine.

Windows does not support VNC very well (most VNC servers don't run well on Windows), but the VNC server here is not Windows; it is KVM that provides the VNC server for the guest's console.

1- Create a tunnel (PuTTY on Windows): simply save the connection to the host machine in PuTTY, then under Tunnels set up something like the following (and go back and hit Save again).

Just create a tunnel with source port 5900 and destination localhost:5900 (5901 for the second virtual machine, and so on); leave all other tunnel options unchecked/default.
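If you are connecting from a machine with an OpenSSH client instead of PuTTY, the same tunnel is a single command (user and host name are placeholders):

ssh -L 5900:localhost:5900 user@kvm-host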

2- To know which VNC ports are currently listening on the host, run this command on it:

netstat -tlpn | grep 590

3- VNC should now connect to localhost:5900, for example (I am using TightVNC on Windows), and that connection is automatically routed to the KVM host, which displays the corresponding guest's console (every guest has its own port).

BCACHE – how to set it up

About this tutorial

Despite being lengthy, this tutorial is in fact easy and fast; I have split it into parts so that you can get down to business instantly if you need to.

Worth mentioning: this simple procedure may present itself as rocket science. It is not, so I advise you to dive in (experimenting on a separate computer first may be a good idea). Again, I assure you it is VERY STRAIGHTFORWARD; the length is because I am elaborating to make it easy.

Disclaimer

This is an effort to put all the information I need about bcache in one place, for my reference and your benefit. But please beware: bcache should be run with backups (you will have to work out a scheme yourself, as RAID under the cache would render the cache redundant, for example, and rsync on big files can make your CPU do a lot of work). In any case, I am not responsible and will not be held liable for any damage you may endure.

SSDs are the future

When it comes to SSDs, they have come a long way in terms of price, and one day they will replace hard drives; I have no doubt about that. There is no advantage in a hard drive that an SSD can't eventually match (you might argue TBs written, maybe, but have you checked the reliability of a hard drive stressed to the level needed to achieve those TBs written?).

What is bcache for

Spinning hard drives are fast beasts when it comes to sequential reads, but for random reads, where the head has to seek to the data, they become very, very slow: you can be reading at 200MB/s and suddenly drop to 2MB/s. SSDs do not suffer nearly as much from random reads; slower than sequential, yes, but nothing close to the gap you see in spinning disks, where the speed difference can be 100-fold OR MORE.

History (Windows)

The earliest attempt I can remember was Intel Robson (2005). Intel Robson, or Intel Turbo Memory, was a feature of the Core 2 platform, but I don't think it made it to the Core i series. It was not very popular, and for a good reason: at the extra cost, OEMs could simply add more RAM. Not only was that better for marketing, it also made more sense, as Windows was already introducing memory caching for disks with Windows Vista.

Some time later, Microsoft came up with ReadyBoost (with Windows Vista). ReadyBoost relied on fast pen drives to cache data from the spinning disk. It was not a very popular feature at the time, for many reasons: it had to be designed so the drive could be pulled out without affecting data integrity, which restricted write caching (write-through only, no write-back), and it duplicated work that RAM already did perfectly. Not to mention that affordable pen drives were not that fast to begin with.

Caching today.

As it is today, caching still makes sense; I would argue it makes more sense than ever. Spinning hard disk drives are still much cheaper than SSDs: a good 1 TB SSD from Samsung is around $340 for the EVO and $460 for the Pro (July 2017); compare that to a spinning disk with a price tag averaging $40, and you will see that the difference is still around 10-fold, even more as you go up in size. So what do we do?

The answer is: cache the disk. Now is a better time than ever to use caching, with super-fast SSDs that employ wear leveling and are connected over a more stable and persistent connection (SATA inside the computer).

SSD caching on Windows.

On Windows, the answer may be ISR (Intel Smart Response). I have not tried it myself, but I have heard many good things about it: you go into your BIOS and set the disks to RAID mode, then use the Intel Management Engine software to cache the spinning disk on the SSD. That simple.

I could almost swear Intel had a software solution for this that was a bit pricey, but I can't seem to find it; I remember watching a video about it many years ago.

In any case, I am not very experienced with windows, so I will just leave it here.

SSD disk caching on Linux

On Linux, there are many solutions. The one I will be showing you how to use right now is bcache, because it is fast, efficient, and works on block devices.

So, I am assuming you have installed Debian Stretch (9), logged in, and have networking et al. running. Now let us get to installing bcache. Mind you, bcache has been part of the Linux kernel since Jessie or even before, so all you need is bcache-tools; in Jessie you had to compile it with a few lines, while in Stretch there is a package for it.

** BCACHE **

To help avoid confusion: you can start using your big hard disk as a bcache backing device before you have an SSD, then attach an SSD to it whenever you want to start gaining performance.

Installing bcache tools in Debian Jessie (8)

** IF YOU ARE INSTALLING ON JESSIE, BCACHE TOOLS WERE NOT PACKAGED FOR JESSIE**

apt-get install git make gcc pkg-config uuid openssl util-linux uuid-dev libblkid-dev

cd /usr/src
git clone https://github.com/g2p/bcache-tools.git
cd bcache-tools
make
make install

** END OF FOR JESSIE **

Installing bcache tools in Debian Stretch (9)

apt-get install bcache-tools

Planning how to set up the drives

In this article, I will be setting up two separate disks that are not system disks: one is a 4TB spinning disk, the other a 1TB SSD. There are a few rules you need to be aware of, though:

1- You can cache as many backing devices as you wish with one SSD
2- You cannot cache one backing device with more than one SSD
3- There are memory requirements for bcache, so, for example, dropping the disks into a 486-class computer with 256 MB of RAM and serving them over iSCSI is not a good idea.

My setup

The backing device is your large spinning disk, the caching device is the SSD

My backing device is a 4TB hard drive connected as /dev/sde.
My caching device is a 1TB Samsung 850 EVO, connected as /dev/sdc (there are alignment considerations here, since it is a TLC disk; the Pro is MLC and works like a regular drive with no alignment issues).

Setting up the backing device (sde), mounting and populating it with data

You may want to start with the following command to clear any existing filesystem from the drive (change sde to your own drive designation):

wipefs -a /dev/sde

Now, let's set up sde as the backing device and sdc as the caching device.

1- Run parted for backing device

parted /dev/sde
mklabel gpt
mkpart primary ext4 0% 100%

2- Make it a bcache backing partition

Using make-bcache, you will use the -B switch to tell the system that this is the backing device, meaning the spinning disk

make-bcache -B /dev/sde1

output from the above will be something like

UUID:                   19d92bc8-8f49-479a-9480-33ca659b91b2
Set UUID:               0e3f386a-ec62-42b9-b0f3-025a09253946
version:                1
block_size:             1
data_offset:            16

3- Format it as ext4 or whatever filesystem you fancy

mkfs.ext4 /dev/bcache0

4- Mount it like you would any other partition (create the mount point first if it does not exist):

mount /dev/bcache0 /hds/bcache0

5- If you like, you can now copy your data to it and get things ready before attaching the SSD as cache.

I prefer to copy all the files to the spinning disk before attaching the SSD: sequential copies are not cached anyway, and whatever would get cached during the copy is not what we will use frequently. So I copy my files first, then attach the SSD.

Setting up the caching device (sdc), then attaching it to the backing device

1- Create a partition on the caching device (you choose the size you want to use as cache), but if you want to use the whole disk, I recommend leaving 10% unpartitioned for over-provisioning:

wipefs -a /dev/sdc

parted /dev/sdc
mklabel gpt
mkpart primary ext4 0% 90%

Using make-bcache, you will use the -C switch to tell the system that this is the caching device, meaning the solid state disk (SSD)

make-bcache -C /dev/sdc1

output from the above will be something like

UUID: eeda3570-eb1b-4983-8c53-76322a654585
Set UUID: 92dbf6ca-0f0b-44d5-b70e-8f1e7919838d
version: 0
nbuckets: 1716964
block_size: 1
bucket_size: 1024
nr_in_set: 1
nr_this_dev: 0
first_bucket: 1

Now, even if this serves no technical purpose, just to give you a feel for it, try running the command below; it should report "no cache", because we have not attached a cache yet:

cat /sys/block/bcache0/bcache/state

DO NOT format the caching partition as ext4

This time we won't put ext4 on it like we did on the backing device. Think about it: the OS should see the backing device and, at this abstraction layer, not even know about this one, so why would it carry any filesystem other than the one bcache itself understands? We will simply attach it to the disk.

Attaching the caching device

If you look at the output of the make-bcache -C command above, you will notice a Set UUID; we need this unique ID to tell bcache which SSD to attach to which bcache device. The only bcache device we have so far is bcache0, as you can see above. Here is how we attach it:

echo 92dbf6ca-0f0b-44d5-b70e-8f1e7919838d > /sys/block/bcache0/bcache/attach

Now, if we run the command above again

cat /sys/block/bcache0/bcache/state

It should now read "clean" or "dirty" instead of "no cache" (I would bet it reads "clean" at this stage), depending on whether something has been written to it that has not yet reached the backing device.

Setup all done, unless you want to fine tune it for your purpose, then read on.

Tuning the cache.

1- Caching mode

To inspect the caching mode we are using now:

cat /sys/block/bcache0/bcache/cache_mode

Which will probably result in

[writethrough] writeback writearound none

By default, the system uses writethrough (better data integrity), but if you are like me and have made 100% sure the power won't ever go down, or if you back up the data in real time, you might want to switch to writeback. Writeback gives much faster write operations, which is not necessarily a requirement for all applications.

echo writeback > /sys/block/bcache0/bcache/cache_mode

2- Sequential read cutoff

The other thing you might wish to tune is the sequential read/write cutoff: sequential runs below this size are considered worth caching. By default it is 4MB, so everything under 4MB of sequential I/O will be cached. I personally like to take that down to 1MB, judging by the fact that files larger than 1MB read pretty fast directly from the disk! But surely this will depend on your application and on experimenting with it.

cache 1 megabyte and smaller

echo 1M > /sys/block/bcache0/bcache/sequential_cutoff

Cache everything (0 is a special value meaning no cutoff, not "smaller than zero"):

echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

Back to caching 4 megabytes and smaller (the default):

echo 4M > /sys/block/bcache0/bcache/sequential_cutoff

3- Percentage of dirty data to allow on SSD.

I personally like it the way it is (10% of the SSD's size), but you can change that, and sometimes you have to change it temporarily for certain purposes.

Flush all dirty data to disk as soon as you can

echo 0 > /sys/block/bcache0/bcache/writeback_percent

Allow 10% dirty data

echo 10 > /sys/block/bcache0/bcache/writeback_percent

The first (value 0) is very useful when you want to disconnect the cache: to disconnect, you want the dirty_data on the SSD to be 0, so you start by issuing the first line above, then as soon as all the data is flushed to the backing device, you can detach the SSD, as I show further down.

Manipulating the setup

Sometimes you want to replace your SSD with a larger, smaller, or newer one; other times you want to disconnect it and use the backing device without a cache; and other times you want the same caching device to cache more disks. Here I will show you how.

Assuming you want to disconnect the SSD: for this to happen you need two steps. First, make sure there is no dirty data; second, detach it from the backing device.

For the first step, we inform bcache that we don't want any dirty data. By default, bcache allows dirty data up to 10% of the size of the SSD; we need to make that ZERO percent:

echo 0 > /sys/block/bcache0/bcache/writeback_percent

Remember, if you reattach it (or otherwise change your mind), set it back to ten percent the same way:

echo 10 > /sys/block/bcache0/bcache/writeback_percent
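For the second step, the detach itself, here is a sketch based on the kernel's bcache sysfs documentation: watch dirty_data until it reaches zero, then write to the detach file (bcache will flush any remaining dirty data before actually detaching):

cat /sys/block/bcache0/bcache/dirty_data
echo 1 > /sys/block/bcache0/bcache/detach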

Monitoring cache and cache performance

1- How much dirty data is on the SSD: assuming "/sys/block/bcache0/bcache/state" reads dirty, you can see how much data is dirty with the command:

cat /sys/block/bcache0/bcache/dirty_data

2- Caching statistics

tail /sys/block/bcache0/bcache/stats_total/*

Force mount hibernated NTFS volume

This is a problem I face often. Because of how older versions of the tools functioned, the answers online no longer apply; online, you will find that

ntfsfix /dev/sdc2

should do the trick. In reality it will not, even though the output suggests success:

Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sdc1 was processed successfully.

The solution in reality is asking ntfs-3g’s mount to remove the hiberfile

WHAT YOU NEED – YOU WILL LOSE THE HIBERFILE

mount -t ntfs-3g -o remove_hiberfile /dev/sdc2 /hds/intelssd

Without the remove_hiberfile instruction, you will probably get an error message such as

Windows is hibernated, refused to mount.
Failed to mount '/dev/sdc2': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.

Alternatively, if you do not want to write to the volume, you can mount it read-only with the line:

mount -o ro /dev/sdc2 /hds/intelssd

Mounting a multipart vmdk disk on Linux

There are many ways to do that. One is to use the tools provided by VMware to combine the split disks into one, then map its partitions with:

kpartx -av mydisk.vmdk

Then

mount /dev/mapper/loop0p1 /hds/disk

Another method, which is simpler:

apt-get install qemu-utils
qemu-img convert disk-s001.vmdk s01.raw
....
qemu-img convert disk-s013.vmdk s13.raw
....
qemu-img convert disk-s032.vmdk s32.raw

The above will be sparse files, so their disk usage will not be as big as their apparent size; a "df -h" should not show any loss of disk space beyond the data actually used by files in the image.

Following the above, we need to combine the raw files like so:

cat s01.raw s02.raw s03.raw s04.raw s05.raw s06.raw s07.raw s08.raw s09.raw s10.raw s11.raw s12.raw s13.raw s14.raw s15.raw s16.raw s17.raw s18.raw s19.raw s20.raw s21.raw s22.raw s23.raw s24.raw s25.raw s26.raw s27.raw s28.raw s29.raw s30.raw s31.raw s32.raw > combined.raw
losetup /dev/loop0 combined.raw
kpartx -a /dev/loop0
mount /dev/mapper/loop0p1 /hds/img1
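As a side note: if you still have the small VMDK descriptor file that references the s001…s032 extents, qemu-img can usually convert the whole split set in one go, skipping the cat step entirely (file names are placeholders):

qemu-img convert -O raw disk.vmdk combined.raw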

gigabit Ethernet VirtIO driver for Windows 10 64bit

By default, KVM gives your virtual machine a Realtek RTL8139 Ethernet adapter, with an ancient 100Mbit/second speed; we all need a gigabit Ethernet adapter for the KVM guest.

The answer is replacing the string rtl8139 with virtio in the XML file of the virtual machine, then installing the drivers; the fragment below shows the change.
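For reference, this is the relevant piece of the guest's XML (edit it with virsh edit guestname); a sketch showing only the line that changes, since the rest of your interface element (MAC address, source network) will differ:

<interface type='network'>
  <!-- was: <model type='rtl8139'/> -->
  <model type='virtio'/>
</interface>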

The steps I use are:

1- Run the virtual machine with the Realtek adapter and download the VirtIO driver inside the guest.
2- Once the driver is there, shut down the virtual machine guest (Windows guest), then edit the XML of the guest and restart libvirtd.
3- Start the KVM guest again.
4- Open it with VNC, start the Device Manager, and install the driver you downloaded.

You are good; the adapter should report a speed of 10Gbit/second (10 gigabits per second).

One annoying thing is that all the Windows drivers come in one big ISO file, while you probably just want the single driver you need.

I will add the download links in the coming few days, but you can get the drivers right now if you like from Fedora; the Fedora Windows guest drivers (virtio-win) should work on any Linux distribution (Debian, Ubuntu, etc.).

Connecting the raspberry pi to a wifi network with a static IP

This is a simple thing; there is nothing special about the Pi. First, connect a wireless dongle (USB is your only option anyway), then create a file with your network settings. Because of the scenario this was written for (it is a reference for someone), I will connect it to an Android hotspot; the subnet here is specific to Android, so adjust yours according to your router.

1- Create the following file at /etc/wpa_supplicant/wpa_supplicant.conf:

network={
    ssid="isam"
    psk="abcabc1234"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP
    auth_alg=OPEN
}

2- Now, modify the file /etc/network/interfaces to give the wireless interface its static address, as sketched below.
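The stanza itself is not included here, so this is a minimal sketch of what /etc/network/interfaces could contain, assuming the interface is wlan0 and the usual Android hotspot subnet of 192.168.43.0/24 (every address here is an assumption; match them to your own network):

auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.43.10
    netmask 255.255.255.0
    gateway 192.168.43.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf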

For signal quality and other relevant information, use the following command:

iwconfig