Changing the root password in an LXC container

If you forget your LXC container's root password, you can reset it from the LXC host.

1- chroot into the container's filesystem
chroot /var/lib/lxc/vm51/rootfs

2- issue the passwd command and enter the new password for the container

3- type exit to get back to the LXC host prompt
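
Put together, a minimal sketch of the whole reset, assuming the container is called vm51 and lives under the default /var/lib/lxc path:

# on the LXC host, as root
chroot /var/lib/lxc/vm51/rootfs
passwd    # prompts for the new root password
exit      # back to the host prompt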

 

Another way is to simply fire the container up, then run

lxc-attach -n vm51

then run the passwd command as you normally would, followed by the exit command
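
If you prefer a one-liner, lxc-attach can also run the command directly instead of dropping you into a shell. A sketch, assuming the container vm51 is already running:

lxc-attach -n vm51 -- passwd root    # prompts for the new root password inside the container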

It is very important to understand that if you don't have something such as fail2ban on your server, it is possible that someone brute-forced their way into your container and changed the root password themselves. In that case, I would strongly recommend deleting the whole container and re-creating it from scratch.

The reason is that we don't know what the attacker (if any) may have installed inside the system.

Tar error and how to overcome it

For some reason, while I was extracting a tar.gz file of about half a terabyte with the following command

tar -xvf thisfile.tar.gz

I got the following errors:

tar: Skipping to next header
tar: Error exit delayed from previous errors

It turns out that tar archives are terminated with a run of zero-filled blocks. To tell tar not to treat a run of zeros inside the archive as the end-of-archive marker, use the -i (--ignore-zeros) switch. Note that it goes before the f, not after it, since f must be immediately followed by the file name.

So the command would look like

tar -xvif thisfile.tar.gz

It worked for me; it may or may not work for you, but this is one of the reasons you could get this error, because tar does not tell you what the exact problem is.
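
If you want to rule out plain corruption of the gzip layer first (a different possible cause of the same symptoms, and an assumption on my part, not something the error message tells you), you can test the compressed file before extracting:

gzip -t thisfile.tar.gz && echo "gzip layer looks intact"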

Using axel, a quick example

To download a single file with a speed cap and multiple connections:

axel -a -s 10240000 -n 5 URL

-a is the nicer single-line progress view
-s is the maximum speed in bytes per second; 10240000 here is roughly 10 MB/s (about 80 Mbit/s)
-n is the maximum number of connections

———————————–

To download a list of files
1- Put them in a text file (make sure the line endings are Unix/Linux style (\n))
2- Run a while loop from terminal

while read -r url; do axel -a -n 3 "$url"; done < /root/download124.txt
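
If you would rather have more than one download running at a time, one option is to let xargs hand the URLs out in parallel. A sketch, assuming GNU xargs and the same list file (note that the progress output of the parallel axel processes will interleave):

xargs -a /root/download124.txt -n 1 -P 2 axel -a -n 3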

Tar and compress a directory on the fly with multithreading

There is not much to it: the tar command piped into any compression program of your choice.

For speed, rather than using gzip, you can use pigz (to employ more processors / processor cores), or pbzip2, which is slower but compresses more.

cd to the directory that contains the folder you want to archive

then

tar -c mysql | pbzip2 -vc > /hds/dbdirzip.tar.bz2

for more compression
tar -c mysql | pbzip2 -vc -9 > /hds/dbdirzip.tar.bz2

for more compression, and to limit the CPUs used to 6 instead of 8 (or 3 instead of 4, or whatever you want), since the default is to use all cores
tar -c mysql | pbzip2 -vc -9 -p6 > /hds/dbdirzip.tar.bz2

Or the same with pigz
tar cvf - mysql | pigz -9 > /hds/dbdirzip.tar.gz

Or to limit the number of processors to 6 for example
tar cvf - mysql | pigz -9 -p6 > /hds/dbdirzip.tar.gz

Now, if you want to compress a single file to a different directory

pbzip2 -cz somefile > /another/directory/compressed.bz2
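
To extract one of these archives later, the same piping works in reverse. A sketch, assuming the archive paths from the examples above:

pbzip2 -dc /hds/dbdirzip.tar.bz2 | tar -xvf -
pigz -dc /hds/dbdirzip.tar.gz | tar -xvf -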

Dynamic Round Robin DNS (DDNS with round robin support)

We have just developed an in-house application for Dynamic DNS with round robin support (for our own “validation through IP” purposes).

We could make this application public if it gets enough attention and is of use to many people.

The application is fully functional at the minute, but if it gets attention, we can improve the user interface, and make it public.

The Dynamic DNS with round robin support takes into account that hosts which have not sent an update for a while should be removed from the round robin record.

* Username and password verification
* Modifiable TTL, kept in sync with the frequency of IP checks
* Almost infinitely scalable system
* PHP client; easy to create any other client. The PHP update script can run as a cron job
* A host is removed from the list when no update request is received for a user-set amount of time, and returns to the list once an update request is sent again
* Super fast
* For multi-homed links, the password per hostname (not per zone) eliminates the risk of an update request arriving through a different Ethernet adapter that is the main link of another machine
* Security through MD5 sums that change with the IP change (your passwords are never transmitted during an update)
* If 2 machines are using the same IP address, the anti_duplicate_values array will limit the round robin records to that value only once
* Dead records currently only disappear when a different IP changes; TODO: the change must be reflected when another host updates

mysqldump by example

mysqldump is probably one of the best tools for taking copies of databases, and it comes with MySQL, so you don't need to install anything extra.

Example 1: Back up all databases (use with caution, see below)
IMPORTANT: if you load this dump into another server, you will lose all users on the target server. This is because the database named mysql (not the database engine, but the actual database that holds the users) on the target server is overwritten by the one from the source.

Added note: if you want to monitor how large the uncompressed dump has grown, or in other words, how much data mysqldump has produced so far, you can use pv. In this example I expect the data to be around 123 GB, so I pass that to pv so it can show a percentage of what has been done; it reports the exact byte count anyway, but the percentage is visually easier.

mysqldump --opt -u root --password="yourpass" databasename | pv -s 123g | pigz -c > dumpfile.sql.gz
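
If you are not sure what figure to give pv -s, a rough estimate of the data size (data plus indexes; the actual dump will differ somewhat) can be pulled from information_schema. A sketch, assuming the same root credentials as above:

mysql -u root --password="yourpass" -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) AS estimated_gb FROM information_schema.tables;"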

 

1- Dump all databases

mysqldump -u root --password="thispassword" --all-databases > thisdatabasedump.sql

2- Dump all databases and gzip-compress the output file. gzip uses the same deflate compression found in zip files; this command compresses on the fly (make sure gzip is installed)

mysqldump --opt -u root --password="thispassword" --all-databases | gzip -9 > thisdatabasedump.sql.gz

3- Dump all databases and bzip2-compress the output file. bzip2 compression is better than gzip compression but takes significantly more time; like the one before, this command compresses on the fly (make sure you have bzip2 installed)

mysqldump --opt -u root --password="thispassword" --all-databases | bzip2 > thisdatabasedump.sql.bz2

4- If you have a server with multiple processors, you can overcome the slowness of bzip2 by simply making all the CPUs (real, virtual or hyper-threaded) work on compressing at the same time. The application is called parallel bzip2 (make sure pbzip2 is installed)

mysqldump --opt -u root --password="thispassword" --all-databases | pbzip2 > thisdatabasedump.sql.bz2

4.5- If your server has 8 CPUs and you only want 7 of them to do the zipping, so that one of them can be dedicated to mysqldump

mysqldump --opt -u root --password="thispassword" --all-databases | pbzip2 -p7 > thisdatabasedump.sql.bz2

I will not give any more examples about compression; as you can see from the examples above, to compress on the fly all you need to do is replace the section ( > thisdumpfile.sql ) with ( | pbzip2 > thisdumpfile.sql.bz2 )

5- Dump a certain database to a file

mysqldump --opt -u root --password="thispassword" thisparticulardbsname > thisdumpfile.sql

6- Dump several specific databases

mysqldump --opt -u root --password="thispassword" --databases db1name db2name db3name db4name > thisdumpfile.sql

7- Dump certain tables from within a database

mysqldump --opt -u root --password="thispassword" databasename table1name table2name table3name > thisdumpfile.sql

8- Exclude certain tables from the mysqldump

 mysqldump --opt -u username --password="thispassword" databasename --ignore-table=databasename.table1 --ignore-table=databasename.table2 > database.sql
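
Going the other way, a compressed dump can be streamed straight back into MySQL without decompressing it to disk first. A sketch, assuming the file names from the examples above (for a single-database dump, add the target database name at the end of the mysql command):

pigz -dc thisdatabasedump.sql.gz | mysql -u root --password="thispassword"
pbzip2 -dc thisdatabasedump.sql.bz2 | mysql -u root --password="thispassword"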

Disable the “Windows has detected a hard disk problem” message in Windows

The following are the steps to disable the error message associated with a bad hard drive, the message that Windows displays after every login. We will disable it from within Windows, without disabling S.M.A.R.T. in the BIOS.
[Screenshot: the “Windows has detected a hard disk problem” dialog]

The message above reads as follows (on my computer; on yours, the disk model number and the names of the volumes will probably be different).

Windows has detected a hard disk problem.
Back up your files immediately to prevent information loss, and then contact the computer manufacturer to determine if you need to repair or replace the disk

Then, you are presented with the following two options

Start the backup process

Ask me again later
-- If the disk fails before the next warning, you could lose all of the programs and documents on the disk.

In the show details dialogue you should see:

Immediate steps
Because disk failure will cause you to lose all programs, files and documents on the disk, you should back up your important information immediately, try not to use your computer until you have repaired or replaced the hard disk.
Which disk is failing
The following hard disks are reporting failure.
Disk name: TOSHIBA MK3264GSXN ATA Device
Volume: C:, D:, E: 

My advice would be:

Do not disable S.M.A.R.T. in the BIOS; instead, ask Windows not to display this message. For a failing disk, you still want the S.M.A.R.T. data accessible to other programs so you can keep an eye on it.

To disable this error message from within windows, do the following

Click the Start button and type the word “task” in the search box; Task Scheduler should appear. Right-click it and choose Run as administrator.
Once it is open, follow the tree on your left as follows:
“Task Scheduler Library” => “Microsoft” => “Windows” => “DiskDiagnostic”

As shown in the image, select the second entry, right-click it, then click Disable.

The following is the dialogue
[Screenshot: the DiskDiagnostic tasks in Task Scheduler]

Close Task Scheduler, then restart your computer to check that it worked.

The Linux dd command

To check the progress of a running dd copy, open a second terminal window and run top to get the PID of the dd process, then issue the command kill -USR1 xxxx (replace xxxx with the actual process ID). It may appear that nothing happened, but switch back to the terminal dd is running in.

you should see something like

1036902161+0 records in
1036902160+0 records out
530893905920 bytes (531 GB) copied, 29702.1 s, 17.9 MB/s
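
To avoid hunting for the process ID in top, pgrep can find it for you. A small sketch, assuming only one dd process is running:

kill -USR1 $(pgrep -x dd)

Or, to print a fresh progress line every 60 seconds:

watch -n 60 'kill -USR1 $(pgrep -x dd)'

Either way, the statistics appear in the terminal where dd itself is running, not in the one where you send the signal.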

Recovering deleted files from an ext4 partition

Update: although extundelete restored most of my files, some files could only be restored without their file names, through an application called photorec that is installed alongside testdisk.

So, what happened was that I added a directory to Eclipse, a message appeared, I hit Enter accidentally, and all the files in the web directory were lost. No backup, years of programming…

Instantly, I shut down the computer so that I would not overwrite the freed disk space with new files, logs and the like. I got a larger disk (1.5 TB) and did the dd copy first (I recommend gddrescue in place of dd, just in case your disk has bad sectors).
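
For reference, a minimal sketch of that imaging step, assuming the disk with the deleted files shows up as /dev/sdb and the new, larger disk as /dev/sdc (check yours with lsblk before running anything, since this overwrites the target):

dd if=/dev/sdb of=/dev/sdc bs=4M
# or, with gddrescue (the binary is called ddrescue), which copes with bad sectors and keeps a map file
ddrescue -f /dev/sdb /dev/sdc /root/rescue.map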

I installed Linux (Debian 7) on the new disk, fitted the hard drive from the other computer into the new PC, then installed the software I always use to recover files, testdisk. TestDisk did not work as expected on either disk; when it came to the ext4 partition, the process that ended in an error was as follows

testdisk
Create a log file
Choose the 1 TB disk (the one with the deleted files)
Partition type (Intel)
Advanced
Choose the main partition (ext4) and choose List (left and right arrow keys)
Damn, the error.

TestDisk 6.13, Data Recovery Utility, November 2011
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
 1 * Linux                    0  32 33 119515  60 33 1920010240
Can't open filesystem. Filesystem seems damaged.

So, I quit TestDisk and installed extundelete:
apt-get install extundelete

extundelete /dev/sdb1 --restore-directory /var/www

This way, I only restore the files from that directory. If you want all the deleted files, you could use something like

extundelete /dev/sda4 --restore-all
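
One thing worth knowing: extundelete writes whatever it recovers into a RECOVERED_FILES directory under the current working directory, and the partition you are recovering from should not be mounted while you work on it. A quick sketch, assuming the same /dev/sdb1 as above and a working directory on the new disk (the path here is just an example):

umount /dev/sdb1                               # nothing should be writing to the old partition
mkdir -p /root/recovery && cd /root/recovery   # recovered files land in ./RECOVERED_FILES
extundelete /dev/sdb1 --restore-directory /var/www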

Anyway, my files are back; a few are missing, but I am sure I can deal with that.