Fastest disk duplication tool

I have been using dd for a long time: specify the block size and so on, then pipe it into pv if you like, and there you have it.

But you can use pv directly.

So let us assume we want to clone sda onto sdb (note how the redirection arrows point out of sda in the command):

pv < /dev/sda > /dev/sdb

and you are done, no need for dd. In my experience pv is faster, because it adapts to the speed of both disks, and there you have it.

At first it will be much faster than you anticipated; that is because it is buffering in RAM. Once you run out of RAM, the speed will drop back, and even if you don't run out of RAM, the final sync operation still takes time.

For example, while cloning my 40GB SSD onto an 80GB Western Digital, the speed was 180MB/s at first; once I ran out of RAM, it dropped to 50MB/s.
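For completeness, here is a sketch of the dd route mentioned at the top, piped through pv for a progress bar, with a final sync so buffered writes actually reach the target. SRC and DST are placeholders: point them at your devices (e.g. SRC=/dev/sda DST=/dev/sdb); they default to ordinary files here so the sketch can be tried safely, and it falls back to plain dd when pv is not installed.

```shell
SRC=${SRC:-/tmp/clone_src.img}
DST=${DST:-/tmp/clone_dst.img}
# Create a small dummy source when SRC is not a real device, for a dry run.
[ -e "$SRC" ] || dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null
if command -v pv >/dev/null 2>&1; then
    dd if="$SRC" bs=4M 2>/dev/null | pv > "$DST"
else
    dd if="$SRC" of="$DST" bs=4M 2>/dev/null   # fallback when pv is absent
fi
sync   # flush buffered writes to the target before pulling the disk
```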

The best way to deal with duplicate files on Linux

There are 2 popular packages to deal with duplicate files on Linux.

The first is fslint (apt-get install fslint). Its downside is that it does not sort by file size; when I have 10,000 duplicate files on my disks, I really don’t want to go through them all and make choices. The second is fdupes, which I have never used, so I cannot comment on it. So we will be using fslint with a small script.

First, find the duplicate files; in my case, I only want the ones over 2MB:

/usr/share/fslint/fslint/findup /hds -size +2048k > /root/dups.txt

Now, this simple little script should read the data into a MySQL table (it is a command-line PHP script; you will need to edit the MySQL username, password and database). You also need to tell it what path you used in the command above (I used “/hds”). Also included is the database SQL file, which you can import with phpMyAdmin.

Now you can run the above script and it will go and investigate file sizes on the file system.

Then you can either walk through the database after sorting by size, or write your own display script (fetch and print, nothing too fancy). That way you will know where your greatest gains are, and you will not lose a day filtering those duplicate files.
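If you do not want a database at all, a rough shell alternative for the sorting step might look like the following. `sort_dups_by_size` is a hypothetical helper name, and it assumes dups.txt holds one file path per line (findup's output format):

```shell
# Prefix every existing file from the findup output with its size in bytes,
# then sort descending so the biggest duplicates surface first.
sort_dups_by_size() {          # usage: sort_dups_by_size /root/dups.txt
    while IFS= read -r f; do
        [ -f "$f" ] && printf '%s\t%s\n' "$(stat -c %s "$f")" "$f"
    done < "$1" | sort -rn
}
```

You would then run something like `sort_dups_by_size /root/dups.txt > /root/dups_by_size.txt` and start deleting from the top.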

Have fun, and please let me know what you think

Connecting the Raspberry Pi to a WiFi network with a static IP

This is a simple thing; there is nothing special about the Pi. First, connect a wireless dongle to it (USB is your only option anyway), then create a file with your network settings. Because of the scenario this is written for (it is written as a reference for someone), I will connect it to an Android hotspot; the subnet here is specific to Android, so others should use their own according to their router.

1- create the following file at /etc/wpa_supplicant/wpa_supplicant.conf

network={
ssid="isam"
psk="abcabc1234"
proto=RSN
key_mgmt=WPA-PSK
pairwise=CCMP
auth_alg=OPEN
}

Now, modify the file /etc/network/interfaces
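The interfaces file itself is not shown above; a minimal sketch for a static address on an Android hotspot could look like the following. The 192.168.43.x range is Android's usual tethering subnet, but the exact address below is an assumption, so pick a free one on your network:

```
# /etc/network/interfaces -- wlan0 with a static address (sketch)
allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.43.50
    netmask 255.255.255.0
    gateway 192.168.43.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```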

For signal quality and other relevant information, use the following command:

iwconfig

Note on PATA – IDE 1.8″ 2.5″ compatibility and difference

Just because this took a lot of research from me (digging through specs), I am posting it here for my own reference.

Note: CF cards will work on both 3.3V and 5V, so there is a bit of a mystery as to why we need a voltage step-down chip and a jumper for this particular adapter; other drives are clearly incompatible IDE interfaces.

It turns out that when combined with a converter using the fc1307a chip to create a CF card out of 2 SD cards, it will not work without the voltage step-down: with the wrong jumper setting, or with cards and no step-down, a computer will see the combined cards but will fail when trying to format them, for some reason.

The 1.8″ adapter (sideways IDE interface) is 3.3V, while normal 2.5″ disks are 5V. The adapter we have that turns CF cards into a 1.8″- or 2.5″-compatible drive has a JUMPER to select the voltage; this jumper effectively activates or deactivates the voltage step-down chip (the only chip on the board).

So there you have it: 1.8″ and 2.5″ PATA drives are not compatible, regardless of the fact that the pin interfaces are the same. Adapters with a voltage step-down will work on both, but you need to set them up first with the JUMPER.

Raspberry PI camera, quick guide

For those who want the real lowdown right away, here is the final verdict: as of the time of writing, there are no faster options; the limitations here are imposed by the creators of the Pi and its chips (the Broadcom GPU).

You have 2 options for capturing stills in burst mode. The case I am addressing here is taking photos of rapidly moving objects in broad daylight, so the shutter speed is very fast and the ISO is not so high (because of the sun).

1- Through the stills part of the GPU: you would get less than 2 frames per second; you can use the timelapse function.

2- Through the video port on the GPU: this allows a maximum of 15 frames per second at 5MP resolution, with no binning and no sensor cropping, but the stills port gives somewhat better photo quality.

Considerations

The speed of your storage counts; if storage becomes a bottleneck, 15FPS will not be possible.

1- You can use faster USB storage; the USB port is bound to a maximum of 45MB/s.

2- A fast memory card; currently, 25MB/s comes only on expensive “Professional” SD cards, so I would rather use the USB port.

3- Splitting the bandwidth between storage and Ethernet is not a good idea; the Ethernet adapter on the Raspberry Pi is connected through USB, so it will share the USB bandwidth 😉

4- Splitting the storage, or buffering some of the frames on the SD card, can be a good idea, as the SD card is connected directly to the CPU, not through USB.

5- Lowering the photo quality for increased performance will give you good mileage; something equivalent to what other programs call 80% quality reduces the size pretty well without sacrificing much.

How to get through the stills port

raspistill -o %04d.jpg -ss 5000 -ISO 300 -t 10000 -tl 100 -q 10 -th none

So, the question that comes to mind instantly: quality is a value from 0 to 100, so why are we using 10? It is a good question, and I know it does not seem to make sense, but judging with my eyes, -q 10 gives a good compression ratio while the photo still looks good. The range between 10 and 100 does not make much difference, and below 10 things go bad very rapidly; I have no idea why.

Now to the next method, which is getting photos from the video port… here is a quick and dirty picamera Python script.

If you don’t know how Python works, please don’t change the indentation; indenting the print statement would attach it to the previous code block, which is something we don’t want.

#!/usr/bin/python3
import time
import picamera

frames = 60

with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    camera.shutter_speed = 5000
    camera.iso = 100
    camera.framerate = 10
    camera.start_preview()
    time.sleep(2)
    start = time.time()
    camera.capture_sequence([
        '/usbstick/image%02d.jpg' % i
        for i in range(frames)
        ], use_video_port=True)
    finish = time.time()
print('Captured %d frames at %.2ffps' % (
    frames,
    frames / (finish - start)))



So, simply put: if you are running Raspbian, you need to get into the Pi config (the command is raspi-config), enable the camera, then reboot.

Also, don’t forget to run “rpi-update” or apt-get upgrade

Now, to take a photo, you would enter the command

raspistill -o cam.jpg

But this will take a long time for one photo (5 seconds by default); this is because it needs to set the exposure. If you want to give it less time, you could run the command

raspistill -t 1500 -o test.jpg (One and a half seconds)

But that is still too long; I already know that it is broad daylight and that I need a very fast shutter speed, so how do I set things manually?

Taking a photo of fast-moving things in the SUN, I would run the command as follows:

raspistill -t 1 -ss 1000 --awb sun -o test1.jpg

For moving things in cloudy weather, I would slow the shutter down a little (see the shutter speed note below).

So, here are the manual settings:

-ss 1000 (1ms, good for a sunny day outdoors; 4ms is good on a cloudy day, and with dim light I would recommend bumping this up a bit)

-ISO 300 (sensitivity; back in the day, the film dictated this property). The faster the shutter (the lower the -ss value), the higher the needed sensitivity (ISO value); the range is 100 to 800.

--nopreview (disables the preview window)

Presets for automatic white balance (AWB)

–awb sun

APC is gone, drop in replacements here

APC is gone and no longer maintained; there are now alternatives.

For opcode caching, in PHP 5.5 (5.6 ships with jessie) the opcode cacher (OPcache) module is installed and enabled by default. As for the key cache (persistent across pages), we have an alternative called APCu (just the user key/value cache); once installed, the apc_ functions return to PHP, so it is a drop-in replacement, no program modifications needed.

To install APCu, you run the following

1- Install the tools
apt-get install apache2-threaded-dev php5-dev php-pear make
2- Before you get APCu-4.0.7, check what the latest version is!
pecl install channel://pecl.php.net/APCu-4.0.7

Now you are done, all you need to do is add
extension=apcu.so
to the PHP config file; in my case, what I do on Debian jessie is add it in a file here:
/etc/php5/apache2/conf.d/apcu.ini

There you have it, you are back on track.

Over provisioning SSD in linux

Over provisioning a Samsung 1TB 850 EVO

Mind you, don’t follow this tutorial step by step unless you have a 1TB Samsung 850 EVO; if you have a smaller disk, you need to adapt the numbers to your SSD 😉

Over provisioning a flash disk is simply some un-partitioned space at the end of the disk, but you need to tell the SSD’s controller about that free space so it can use it for its housekeeping. You also need to find out whether Tejun Heo’s on-demand HPA unlocking patch applies to your distro; if it does, you need to get your kernel patched first.

First of all, the controller will usually use the cache RAM to do the over provisioning, or at least this is what I understood from some text on the Samsung website; you can make things faster by allowing it to use flash space while it erases a 1.5MB flash area to put the data in.

1- How big should the over provisioning area be ?

Samsung recommends 10% of the disk’s space. Somewhere, hidden in a PDF on their website, they explain that OP space can be anywhere between 7% and 50%! We will use 10% as our writing patterns are not that harsh; but mind you, a database that alters a few rows every second can probably make the most use of such OP space.

2- Won’t that 10% wear out before the rest ?

No. There is a mapping function inside the controller, so that space is in fact wherever the controller thinks is appropriate. The wear leveling algorithm kicks in at a stage after the logical stage of partitions etc.; it is blind to the file system and to the over provisioning area. It simply remaps any address you give it to an address that is not already mapped; on flash erase, those mappings are deleted, and other areas of the disk are assigned in their place. I have no idea whether it uses a random algorithm or simply keeps a record of flash chip usage (at this sample size, that makes no difference).

3- Are you sure we are informing the controller and not just telling Linux what the last address is ?

Sure I’m sure, ask the controller DIRECTLY yourself with the command

smartctl -i /dev/sdb

Before the operation we are doing in this article, it will report 1,000,204,886,016 bytes; after it, it will say

User Capacity:    900,184,411,136 bytes [900 GB]

Meaning that now, the disk’s S.M.A.R.T. attribute tells us that this much is available for the user after the over provisioning operation

So, how do we over provision in linux

See the last sector of your SSD:

hdparm -N /dev/sdb

In my case, my Samsung 850 EVO reports the following; notice that the same number appears twice (current max out of native max), and HPA is disabled:

max sectors = 1953525168/1953525168, HPA is disabled

Now, 1953525168 * 512 = 1,000,204,886,016 (1 TB !)

Now, we want to set a maximum address; anything after this address is a PROTECTED AREA that the controller knows about. I will multiply the number above by 0.9 to get the maximum address, and take the integer part alone.
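The arithmetic can be checked in the shell, where integer division drops the fraction for us (the article uses a value a few sectors higher; any cutoff in that neighbourhood works the same way):

```shell
MAX=1953525168                 # native max sectors, from: hdparm -N /dev/sdb
TARGET=$(( MAX * 9 / 10 ))     # keep 90%; integer division drops the fraction
echo "$TARGET"
```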

hdparm -Np1758172678 --yes-i-know-what-i-am-doing /dev/sdb (without the flag, hdparm -Np1758172678 /dev/sdb will ask you whether you know what you are doing)

 setting max visible sectors to 1758172678 (permanent)
 max sectors   = 1758172678/1953525168, HPA is enabled

Now again, hdparm -N /dev/sdb

max sectors = 1758172678/1953525168, HPA is enabled

Now, to make sure we are not suffering from that dreaded bug, let’s reboot the system and check again. I am using Debian jessie, so it is unlikely that I am affected.

Yup, hdparm -N /dev/sdb still gives us a maximum address smaller than the actual physical one.

Now, we seem to be ready to talk fdisk business.

fdisk /dev/sdb

Now, if you type o (create a clean partition table), then p (print), you should get a line such as

Disk /dev/sdb: 838.4 GiB, 900184411136 bytes, 1758172678 sectors

This means that fdisk understands, and asking it to create a partition (the n command) will yield this:

/dev/sdb1 2048 1758172677 1758170630 838.4G 83 Linux

Aren’t we happy people.

Now, let’s mount with TRIM support, and enjoy all the beautiful abilities an SSD will bless us with.

tune2fs -o journal_data_writeback /dev/sdb1
tune2fs -O ^has_journal /dev/sdb1
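The mount step itself is not shown above; one way to get TRIM is the discard mount option in /etc/fstab. A sketch follows, where the /data mount point and the ext4 file system are assumptions:

```
/dev/sdb1  /data  ext4  defaults,noatime,discard  0  2
```

Alternatively, running fstrim periodically from cron achieves the same without the per-delete overhead of discard.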

NOTE: in the event that you are presented with an error such as the following

/dev/sde:
 setting max visible sectors to 850182933 (permanent)
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 40 01 21 04 00 00 a0 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 max sectors   = 1000215216/1000215216, HPA is disabled

The most likely cause is the SATA controller (try executing the hdparm -Np command using a different SATA controller); another possible cause of errors is that some disks require being trimmed before this action.

Alliance ProMotion 6410

One little problem with modern VGA cards is HEAT; they consume over 30W at IDLE, and those 30 watts go into the case. So I looked into my old computers and found one dating back to 1995-1996, pulled its VGA card out, and installed it in a modern i3 computer for testing, pending installation in an i7 with 64GB of RAM and what have you.

On eBay, you can find such PCI cards for around $10: Cirrus Logic, SiS, ATI, or S3; they should all work. If the ProMotion card works, those should work too.

Now I ran the Debian jessie installer and the installation went fine. When rebooting, the system boots with the PCI card but then switches to the embedded graphics (which come with the i3 CPU). The BIOS does not allow me to disable that, so, rather than looking for a solution, I will test the adapter on an i7 (which does not come with built-in VGA).

I have a good feeling that it will work right away. Here is some information about my 20-year-old graphics card (I will post some photos too when I pull it out):

    Made by: Alliance
    Codename: ProMotion 6410
    Bus: PCI
    Memory Size: 1MB
    Max Memory Size: 4MB
    Memory Type: FPM
    Year: 1995
    Card Type: VGA
    Made in: USA
    Owned by: Palcal
    Outputs: 15 pin D-sub
    Power consumption (W): 1.5
    Video Acceleration: MPEG-1 (VCD)
    Core: 64bit
    Memory Bandwidth (MB/s): 213
    Sold by: miro
    Press info: Freelibrary


Upgrading Debian from wheezy to jessie

It is simple, but I am issuing no guarantees; this will probably work for you, but there is a chance that something could go wrong. You have been warned.

Losing network connectivity: one thing that sometimes happens is that the Ethernet device name changes, from eth0 to eth1 for example (depending on what other cards your system has seen, it can even end up at eth6). What you can do about this is either be physically present to fix it (in /etc/network/interfaces), add the other interface names to that file so it works out of the box, or have KVM over IP or something of the sort. If you have LXC installed, the containers will not be able to fire up unless their config (/var/lib/lxc/CONTAINER_NAME/config) has the following two extra lines:

lxc.autodev = 1
lxc.kmsg = 0

apt-get update
apt-get dist-upgrade

edit your apt sources (vi /etc/apt/sources.list)

replace the word wheezy with jessie (Wherever you find it)
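The replacement can be done non-interactively with sed; `switch_release` below is a hypothetical helper name, and it keeps a .bak copy before rewriting:

```shell
# Swap every wheezy reference for jessie in the given sources file,
# keeping an untouched .bak copy next to it.
switch_release() {             # usage: switch_release /etc/apt/sources.list
    cp "$1" "$1.bak"
    sed -i 's/wheezy/jessie/g' "$1"
}
```

You would then call `switch_release /etc/apt/sources.list` before the apt-get steps below.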

apt-get update
apt-get dist-upgrade

You should be done.

Installing enhanceio on Debian jessie (not wheezy or squeeze; minimum kernel 3.7 onwards)

Using enhanceio (the flashcache fork)

First, on a Debian system, you need to compile enhanceio, because Debian has not yet released anything for it (2015-05-13):

apt-get install git make gcc pkg-config uuid openssl util-linux uuid-dev libblkid-dev python
apt-get install build-essential
apt-get install linux-headers-$(uname -r)

Now let us download enhanceio

git clone https://github.com/stec-inc/EnhanceIO.git
cd EnhanceIO/Driver/enhanceio/
make && make install

On make install you will see
make[1]: Leaving directory '/usr/src/linux-headers-3.16.0-4-amd64'
install -o root -g root -m 0755 -d /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_rand.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_fifo.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/
install -o root -g root -m 0755 enhanceio_lru.ko /lib/modules/3.16.0-4-amd64/extra/enhanceio/

cd /lib/modules/3.16.0-4-amd64/extra/enhanceio/
insmod enhanceio.ko
insmod enhanceio_fifo.ko
insmod enhanceio_lru.ko

#Now check that it is loaded as a kernel module with (HINT: Will it be there after reboot ?)
lsmod | grep enhanceio
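To answer the hint above: no, modules loaded with insmod do not survive a reboot. One way to make them persistent on Debian is listing them in /etc/modules; `append_modules` below is a hypothetical helper, written as a function so you can dry-run it against another file first:

```shell
# Append each module name to the given file (idempotently), so the
# enhanceio modules are loaded again at boot via /etc/modules.
append_modules() {   # usage: append_modules FILE MODULE...
    file=$1; shift
    for m in "$@"; do
        grep -qx "$m" "$file" 2>/dev/null || printf '%s\n' "$m" >> "$file"
    done
}
```

The real call would be `append_modules /etc/modules enhanceio enhanceio_fifo enhanceio_lru`.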

cd ../../CLI/
cp eio_cli /sbin/
cp eio_cli.8 /usr/share/man/man8

Now that we have compiled and installed enhanceio, we will simply use it; we are all done. Unlike bcache and dm-cache, there is minimal setup for this kind of SSD/flash cache.

Now, in my setup, I want sdb (its partition sdb1) to act as an SSD cache for sdc1:

eio_cli create -d /dev/sdc1 -s /dev/sdb1 -p lru -m ro -c main_disk_cache

The output of that command was as follows, this result can also be obtained with the command “eio_cli info”, or for super detail “cat /proc/enhanceio/main_disk_cache/stats”

Cache Name       : main_disk_cache
Source Device    : /dev/sdc1
SSD Device       : /dev/sdb1
Policy           : lru
Mode             : Read Only
Block Size       : 4096
Associativity    : 256
ENV{ID_SERIAL}=="WDC_WD1001FALS-00J7B0_WD-WMATV0098355", ATTR{partition}=="1"
ENV{ID_SERIAL}=="INTEL_SSDSA2CW120G3_CVPR1481061P120LGN", ATTR{partition}=="1"
Cache created successfully

Now, to see the block size:

blockdev --getbsz /dev/sdb1 (512 was the result)