Should I learn C before C++?

While answering a friend who messaged me on Facebook today about this very matter, I wanted to see whether there was an angle I had missed, so I visited the usual thread on Stack Overflow and found the classic debate. Here is my take on it, the one I have stood by for many years (and I'm usually not a stubborn person).

So here is the state of affairs.

1- Camp 1: Learn C first; it will help you when you learn C++, and you will end up a better C++ programmer.

2- Camp 2: Starting with C leaves you carrying habits that are bad practice in C++.

Well, here is the thing. The course of action I recommend is the following, and I also explain exactly why, so that if my reasoning does not apply to you, you can form your own judgment.

The way I think of it is in terms of how much time learning will take and what you end up with

Even though C is certainly not a prerequisite to C++, and with practice someone who started with C++ can be just as efficient a C++ programmer as one who started with C, the truth of the matter is that C is ALMOST a subset of C++. There are a few minor differences that make some C code invalid for a C++ compiler, but they can be summed up in a page or so.

So let's lay out the logic for whether you should learn C before C++ or not.

Objective 1: C will never be of any use to me; I have an assignment in C++ and all I care about is getting that assignment over and done with. In this case I recommend starting directly with C++.
Objective 2: I want to learn C++ so that I can make software.

Assuming you have objective #2

So, C is a small language; learning it from a good book can be around a 230-page read (I recommend Kernighan and Ritchie's The C Programming Language). Once you finish with C, you can practice the basic concepts that you will need plenty of in C++; the building blocks inside the objects/classes of C++ are technically written in C.

At this stage (stage 1 of 2), C is under your belt, and C is a great language for embedded devices. For example, a router has only a few kilobytes left for you after Linux is installed on its 4MB of memory; you can't fit the C++ standard library in there, so C (with its much smaller libraries) is the tool you need. A C++ developer will not easily wrap his or her head around which subset of C++ is allowed in such an environment, but if you know C, you are already productive after the first 230 pages and can start doing serious things.

Then the upgrade to C++ is, from a time perspective, a simple one. The one thing to take care of is to always remember to focus on object orientation. When I made the switch to C++ many years ago, I paid plenty of attention to the concept of OO, and in no time I found myself using it professionally when writing C++ and not using it at all when writing C. So the case of sloppy C-style programming in C++ for those coming from C is not a big deal, and it does not have to apply to you; with very little effort you can get over your C habits, you just need to stay aware that you want to use OO.

From my understanding of both languages, 90% of what you learn in C is also part of what you will have to learn in C++, so you will not be wasting any time; reading a C++ book when you come from C is much, much faster. So my recommendation is that you learn C first. This route will give you the most bang for your time.

I even recommend living in C for a year or so before you make that magical jump to C++. Most of Linux's important software packages, for example, are written in C rather than C++, so having a good command of C first can be very beneficial.

Backing up a disk with dd while saving space

The problem with dd is that it copies the whole disk. In reality, the disk could hold only 10GB of data, but the dump file has to be the size of the whole disk, let's say 100GB.

So, how do we get a dump file that is only around 10GB in size?

The answer is simple: zeros compress down to almost nothing, so we first fill the disk's free space with zeros.

So, first we create a zero-fill file with the following command. I recommend you stop the fill while there is still a bit of space left on the disk, especially if the disk has a running database that may need to insert rows; so stop the running fill with ctrl+c before you actually fill the whole disk.

cat /dev/zero > zero3.fill;sync;sleep 1;sync;
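If you would rather not watch the fill and hit ctrl+c at the right moment, another option is to check the free space first and write a fixed amount of zeros with dd instead; a rough sketch (the 1M block size and the count of 9000, about 9GB, are just example numbers, size them to leave some headroom on your disk):

df -h .
dd if=/dev/zero of=zero3.fill bs=1M count=9000
sync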

At this point, you can either delete the zero-fill file or leave it; it will not make a difference in the dump size. Deleting it is recommended, but it won't change much either way.

Notes
sync flushes any remaining buffers in memory to the hard drive
If the fill stops for any reason, keep the file already written and make a second one, and a third, and whatever it takes; do not delete the existing ones. Just make sure almost all of your disk's free space ends up occupied by zero-fill files.

Now, on to dd with compression on the fly (so that you won't need much space on the target drive).

If you want to monitor the dump, you can use pv

dd if=/dev/sdb | pv -s SIZEOFDRIVEINBYTES | pigz --fast > /targetdrive/diskimage.img.gz
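If you are not sure of the drive's size in bytes for pv, the kernel can tell you; a quick example (assuming the blockdev utility is available, which it normally is on Linux):

blockdev --getsize64 /dev/sdb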

Or, if you like, you can use parallel bzip2 (pbzip2) like so; in this example the source is a 2TB hard drive

dd if=/dev/sda | pv -s 2000398934016 | pbzip2 --best > /somefolder/thefile.img.bz2

without the monitoring

dd if=/dev/sdb | pigz --fast > /targetdrive/diskimage.img.gz

Now, to dump this image back to a hard drive

Note that using pigz like this for the decompression is not what you want; something along the lines of the following

DO NOT USE this one, use the gunzip version below
pigz -d /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdd

will not stream the image through the pipe; without -c, pigz decompresses the archive to a file on disk (replacing the .gz) rather than writing to stdout. So the recommended way to do it on the fly is with gunzip, especially since there is little to gain from parallelism when decompressing gzip anyway.

gunzip -c /hds/www/vzhost.img.gz | pv -s SIZEOFIMAGEINBYTES | dd of=/dev/sdb

Or, if you prefer to stick with pigz, add -c so that it writes to stdout:

pigz -dc /hds/www/vzhost.img.gz | dd of=/dev/sdd
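If the image was compressed with pbzip2 as in the 2TB example above, the restore is analogous; a hedged sketch using the same filename and size:

pbzip2 -dc /somefolder/thefile.img.bz2 | pv -s 2000398934016 | dd of=/dev/sda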

My records
The following are irrelevant to you, this is strictly for my records

mount -t ext4 /dev/sdb1 /hds

dd if=/dev/sdc | pv -s 1610612736000 | pigz --fast > /hds/www/vzhost.img.gz

One that covers dumping only part of a disk

Assume I want to copy the first 120GB of a large drive where my Windows partition lives; I want it compressed, and I want the free space zeroed first.

First, in Windows, use SDELETE to zero the empty space

sdelete -z c:

Now, attach the disk to a Linux machine

dd if=/dev/sdb bs=512 count=235000000 | pigz --fast > /hds/usb1/diskimage.img.gz
dd if=/dev/sdb bs=512 count=235000000 | pbzip2 > /hds/usb1/diskimage.img.bz2

If it is an Advanced Format (4K-sector) disk, you would probably do
dd if=/dev/sdb of=/hds/usb1/firstpartofdisk.img bs=4096 count=29000000

or something like that
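To get an exact figure rather than guessing, you can read the partition table first and size the count from it; a hedged example (the device name is taken from the commands above):

fdisk -l /dev/sdb

Then pick a count so that bs times count covers at least the end sector of the last partition you want included.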

Now, if we have a disk image with the extension .bin.gz and we want to extract it to a different directory, we can redirect it as follows

gunzip -c /pathto/my_disk.bin.gz > /targetdir/my_disk.bin

Shrinking Linux disks in VMware Workstation

Here is the theory behind what we are doing

1- Fill all the empty space with zeros. You can do that by writing a gigantic file full of zeros that fills up all the empty space; the write will simply fail once no space is left.

cat /dev/zero > zero.fill;sync;sleep 1;sync;

Delete the file we just made; the zeros stay behind on the freed blocks.

rm -f zero.fill

Shut down the VM and go to the Windows host running VMware Workstation.

Navigate to the directory where the .vmdk files are located.
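From there, the shrink itself is typically done with the vmware-vdiskmanager tool that ships with Workstation; a hedged example from a Windows command prompt (the install path and the .vmdk name will differ on your machine; -k is the shrink operation):

"C:\Program Files (x86)\VMware\VMware Workstation\vmware-vdiskmanager.exe" -k mydisk.vmdk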

Changing the root password in an LXC container

If you forget your LXC container’s password, you can reset it from within the LXC host

1- chroot into the container's filesystem
chroot /var/lib/lxc/vm51/rootfs

2- issue the passwd command and enter the new password for the container

3- type exit to get back to the LXC host prompt

 

Another way is to simply fire the container up, then run

lxc-attach -n vm51

then execute the passwd command as you normally would, followed by the exit command
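If you prefer a one-liner, lxc-attach can also run the command directly (assuming the container is named vm51 as above):

lxc-attach -n vm51 -- passwd root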

It is very important to understand that if you don't have something such as fail2ban on your server, it could be that someone has brute-forced their way into your container and changed the root password. In that case, I would absolutely recommend deleting the whole container and re-creating it from scratch.

The reason is that we don't know what the attacker (if there was one) may have installed inside the system.

Tar error and how to overcome it

For some reason, while I was extracting half a terabyte from a tar.gz file with the following command

tar -xvf thisfile.tar.gz

I got the following errors

tar: Skipping to next header
tar: Error exit delayed from previous errors

So, it turns out that tar archives terminate with a big run of zeros. To tell tar not to consider such a run of zeros a terminator, you use the -i switch (it has to go before the f, not after, since f takes the filename).

So the command would look like

tar -xvif thisfile.tar.gz

It worked for me; it may or may not work for you, but this is one of the reasons you could get this error, because tar does not tell you what the exact problem is.
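If you want to rule out plain corruption of the archive before blaming the zero padding, testing the gzip stream first can help; a quick example (pigz -t does the same thing in parallel if you have it installed):

gzip -t thisfile.tar.gz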

Using axel, quick example

Using axel

axel -a -s 10240000 -n 5 URL

-a gives the nicer single-line progress view
-s is the maximum speed in bytes per second; here it is 10,240,000 (about 10MB/s, roughly 80Mbit/s)
-n is the maximum number of connections

———————————–

To download a list of files
1- Put the URLs in a text file (make sure the line endings are Linux-style (\n); see the note after the loop)
2- Run a while loop from the terminal

while read url; do axel -a -n3 "$url"; done < /root/download124.txt
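If the list was written on Windows, each line may end with a carriage return that gets glued to the URL; converting the file first avoids that. Either of the following should do it (assuming dos2unix or sed is installed):

dos2unix /root/download124.txt
sed -i 's/\r$//' /root/download124.txt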

Tar and compress a directory on the fly with multi-threading

There is not much to it: the tar command piped into any compression program of your choice.

For speed, rather than using gzip, you can use pigz (to employ more processors / processor cores), or pbzip2, which is slower but compresses better.

cd to the directory that contains the folder you want to archive

then

tar -c mysql | pbzip2 -vc > /hds/dbdirzip.tar.bz2

for more compression
tar -c mysql | pbzip2 -vc -9 > /hds/dbdirzip.tar.bz2

for more compression and to limit CPUs to 6 instead of 8, or 3 instead of 4, or whatever you want to use, since the default is to use all cores
tar -c mysql | pbzip2 -vc -9 -p6 > /hds/dbdirzip.tar.bz2

tar cvf - mysql | pigz -9 > /hds/dbdirzip.tar.gz

Or to limit the number of processors to 6 for example
tar cvf - mysql | pigz -9 -p6 > /hds/dbdirzip.tar.gz

Now, if you want to compress a single file to a different directory

pbzip2 -cz somefile > /another/directory/compressed.bz2
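To extract one of these archives later, the pipe just runs the other way; hedged examples matching the filenames above:

pbzip2 -dc /hds/dbdirzip.tar.bz2 | tar -xf -
pigz -dc /hds/dbdirzip.tar.gz | tar -xf -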

Dynamic Round Robin DNS (DDNS with round robin support)

We have just developed an in-house application for Dynamic DNS with round robin support (for our own "validation through IP" purposes).

We could make this application public if it gets enough attention and is of use to many people.

The application is fully functional at the minute, but if it gets attention, we can improve the user interface, and make it public.

The round robin support takes into account that hosts which have not sent an update for a while should be removed from the round robin record.

* Username and password verification
* Modifiable TTL, kept in sync with the frequency of IP checks
* Almost infinitely scalable system
* PHP client; easy to create any other client. The PHP update script can run as a cron job
* Removal from the list when no update request is received for a user-set amount of time; return to the list once an update request is sent again
* Super fast
* For multi-homed links, the password per hostname (not per zone) eliminates the risk of an update request going out through a different ethernet adapter that is the main link of another machine
* Security through MD5 sums that change with the IP (your password is never transmitted during an update)
* If 2 machines are using the same IP address, the anti_duplicate_values array limits the round robin records so that value appears only once
* Dead records currently only disappear when a different host's IP changes. TODO: the change must also be reflected when another host updates!

Disable the "Windows has detected a hard disk problem" message in Windows

The following are the steps to disable the error message associated with a failing hard drive, the message that Windows displays after every login. We will disable it from within Windows without disabling S.M.A.R.T. in the BIOS.

The message reads as follows (this is from my computer; on yours, the disk model number and the names of the volumes will probably be different):

Windows has detected a hard disk problem.
Back up your files immediately to prevent information loss, and then contact the computer manufacturer to determine if you need to repair or replace the disk

Then, you are presented with the following two options

Start the backup process

Ask me again later
-- If the disk fails before the next warning, you could lose all of the programs and documents on the disk.

In the show details dialogue you should see 

Immediate steps
Because disk failure will cause you to lose all programs, files and documents on the disk, you should back up your important information immediately, try not to use your computer until you have repaired or replaced the hard disk.
Which disk is failing
The following hard disks are reporting failure.
Disk name: TOSHIBA MK3264GSXN ATA Device
Volume: C:, D:, E: 

My advice would be

Do not disable S.M.A.R.T. in the BIOS; rather, ask Windows not to display this message. This is because, for a failing disk, you still want the S.M.A.R.T. data accessible to other programs so you can keep an eye on it.

To disable this error message from within Windows, do the following:

Click the Start button and enter the word "task" in the search box; Task Scheduler should appear. Right-click it and choose Run as administrator.
Once it is open, follow the tree on your left as follows:
"Task Scheduler Library" => "Microsoft" => "Windows" => "DiskDiagnostic"

As shown in the image, select the second entry, right-click it, then click Disable.

(Screenshot: the DiskDiagnostic entries in the Task Scheduler dialogue.)

Close Task Scheduler and restart your computer to check that it worked.