The true downsides of C++

I currently work as a Ruby software developer, which is quite uncommon for someone who used to be a C++ developer.

Usually, when I tell somebody about my previous experience, they reply: “Oh, that damn C++. We heard there are so many problems…”. When I answer: “Oh, this is interesting, please tell me more”, they usually start with: “Well, I heard that memory management is very complicated…”. I have been saying for so long that this is not true that I decided to write a post about it.

So, what are the illusory (fake) problems of C++?

  • Memory leaks

Well, that used to be true some time ago. In modern C++ it is definitely not. Smart pointers rule the world of modern C++ development, and they let you write clean, almost memory-leak-free code.

The general idea is that you pass objects instead of raw pointers, and these objects manage the lifetime of the instance. One possible implementation is reference counting.

  • Complexity of the language itself

This is only partly true. Some subsets of the language (for example, templates) are complicated, especially if you read Alexandrescu, but you can mostly stay away from those features.

Multiple inheritance is also hard, but you can usually get by inheriting from one real class plus abstract classes (interfaces). Macros are hard, but just try not to use them.

Without all these features, C++ is still a powerful and simple language that is quite easy to use.

And what are the real main problems of C++?
  • No package manager

I think this is the biggest problem in modern C++. A language means not only syntax but also the infrastructure around it, and C++ lacks it.

All modern languages understand that a package manager is critical: that’s why Rust has had Cargo from the very beginning, Python has pip, and Ruby has gems and Bundler. C++ has some prototypes, for example Conan, but they are far from perfect.

That’s why C++ developers usually don’t reuse existing code but write it from scratch. The only library that is used everywhere in C++ is the STL, and it lacks a lot of functionality.

I once worked on the Chromium project (very big, with a lot of dependencies, so I wanted to understand how they deal with them) and found out that they created their own solution – the DEPS file.

  • Not suited for full-stack development

C++ nowadays means you are working on a small part of an application, and this part is usually quite low-level. Even in the modern desktop world, there are much better ways than Qt (for example, JavaScript) to build a UI. If I had to name the most common uses of C++ in the modern world, they would be:

  • Implementing other languages
  • Writing low-level core functionality

I have worked in several companies that chose to use C++ for all layers (including UI), and that was mostly a waste of time and resources – there are much better ways to do it nowadays, for example .NET or even JavaScript.

Why is that bad? Sometimes you code a black box and never see the global result of your work.

Why learn C++ then?

Still, I understand the great benefits of being a C++ developer. These people usually have a better understanding of algorithms and data structures. Moreover, they usually grasp the complexity of a new language more easily than other people.

So, to summarize: I think C++ is not very good for the industry nowadays. On the web, it is better to take a look at Go, and in the embedded world, at Rust. But hiring an ex-C++ developer is usually a good choice.

Oh my zsh and my custom prompt


On the one hand, this is not really a development topic. It is about tools – or, specifically, about the command prompt in your favorite shell.

This is one of those topics that is never asked about in any interview, but it is still quite important for every developer, because the terminal and command line are what you deal with every day of your developer life.

So, let’s start with the classical default bash prompt on OS X, which looks quite ugly.

It shows your current computer name, the folder you are currently working in, and your username. So far, so good. But to get an idea of how you can improve it, here is my customized Oh My Zsh prompt.


So, what is the real difference between the default OS X prompt and the customized one?

First of all, it shows much more information than the default prompt: the full directory path, the return code of the last command, and whether the directory is under Git control (and, if so, the current branch). This information is highly customizable, so you can add only what you really need. In my case, I need the full path to the folder (to know exactly where I am at all times) and the computer name (because I often work over SSH on a completely different machine and need to see clearly where I am). And of course, seeing which branch you are currently developing on is useful for every developer.

Now, about colors. Many of us still use the terminal the same way it was used 30 years ago, but progress has not passed it by, and you can use a full color scheme in your terminal. So, why not? Colors can really help you concentrate on the information you need. For example, a green arrow can show that the last command finished successfully, and a red one that there was an error.

And, last but not least, I prefer a two-line prompt. When the prompt is one line long and you try to fit all the information you need into it, you basically have no room left to work.


There are many ways to implement this functionality, but we will do it the easiest way. I think the easiest and most popular way is the Oh My Zsh framework.

Basically, it is a collection of various functionality built on top of zsh. Of course, you could implement it all yourself, even in bash, but it would probably take a lot of your time. In Oh My Zsh you can go to the themes folder and create a new entry there. You can also modify (or even reuse, if you don’t need customization) any existing theme. But I found at least one problem with updates: all Oh My Zsh files are under Git control, and that is how it updates itself – simply via git pull. So a modified file will prevent the whole framework from updating.

Here is my customized Oh My Zsh theme.

As an enterprise software developer, I tried to isolate (mostly for readability) the different parts of the prompt, so that even an inexperienced person can easily understand how it is put together.

So, the file itself is quite self-describing. Please use and modify it =)

Deploying MediaWiki on OpenShift

I will continue my blog with an article describing how to deploy your own application to the cloud. This topic is quite new to me: I am a professional multi-platform C++ developer, but the web is a new industry for me, so here I am kind of a newbie.

Why deploy to the cloud? Why can’t we use “normal” hosting for this? Of course, in our case we could (and, to tell the truth, it is probably the better option), because I don’t need my system to be that scalable right now, but I thought deploying to the cloud would be a great experience and would help me in the future.

I decided to start with OpenShift, though I considered other options as well (including Google Cloud, Heroku and Amazon solutions – you can read more about that here), because they have a free account with quite good functionality (for example, a 1 GB database for free).

Just to note: OpenShift does not have its own servers; instead, it uses Amazon for hosting. You can check this using the following command

Step 1. Creating an account and setting up the environment.

So, to start, we need to register at OpenShift.

Then we need to define a namespace for our applications. All applications of the same user share the same namespace via the application URL.

The next step is to install the OpenShift RHC client tools. Most of the steps we will perform can be done both via the web interface and via the client tools, but for most tasks the client tools are just faster. Note that the client tools installer does not only install the tools on your computer, but also performs some post-install steps, for example sending your SSH public key to the server. After everything is done, your screen should show something like this


Step 2. Creating the application

In OpenShift terms, applications consist of cartridges. There are framework cartridges (you must have exactly one in your application) and embedded cartridges (which add extra functionality). Framework cartridge examples include PHP, Ruby, Perl, Node.js, etc. Embedded cartridges include various databases (including MySQL and MongoDB), the Jenkins build server, the Cron scheduler, and many other types of applications.

I am going to deploy MediaWiki with a MySQL database, so I will create two cartridges: a PHP cartridge, because MediaWiki is written in PHP, and a MySQL one, because it is the preferred database for MediaWiki. Note that the same things can be done using the RHC command-line tools, but I will use the web console here. So, press the “Add application” button…


and choose the PHP cartridge there. In the next menu, note the scaling option.


As of now, it is not possible to change the application type from scaling to non-scaling later. I will choose non-scaling, because scaling requires additional cartridges, and we have only three for free.

After adding the framework cartridge, we need to add embedded cartridges for MySQL and phpMyAdmin (a nice GUI for database management) and also create an alias for the site (because I am going to set up redirection using a CNAME later). After all the changes we will have the following picture


Step 3. Uploading the application to OpenShift and fixing the configuration

Uploading the application to OpenShift is quite easy. You just need to download it from the MediaWiki download page and push it to the Git repository on the server. If you need to recall your application’s Git URL, you can use the command

After pushing the changes, various post-receive actions kick in, and the application is deployed automatically (I am using the SourceTree GUI client, but of course you can use the command line if you want).


Step 4. MediaWiki configuration.

OK, now our application is successfully deployed, but it has not been set up yet. If we type our site address, we will see something like this


Just one small but very important note. Somewhere in the middle of setting up the database, you need to enter your database credentials. You can easily find out the database name and credentials


but how do you find out the database IP? It turns out that OpenShift has environment variables for all the database values


After filling in all the fields, LocalSettings.php is generated, and you need to put it into your main PHP directory.


It is better to change these settings to use the original environment variables instead of their current values (because the values are subject to change).


The next step is to add a salt to our LocalSettings.php. We need to obtain a salt here and paste it into the config file as well.

OK, that’s all. Our new MediaWiki is running successfully.

Step 5. Setting a CNAME for our wiki.

OK, our wiki is ready and running on the default OpenShift address, but what if I want a better name for it?

I need to buy a domain. I used GoDaddy, although I could have used something else. Note that the alias on the OpenShift side is already set (see the previous step for details).


Now we need to log in to GoDaddy, choose our domain and run the DNS Manager.


We need to add a CNAME alias for the site. The reason we add a CNAME entry, not an A entry, is that OpenShift can change the server IP; an A entry can only point to a fixed IP address.

Note that the changes will not take effect immediately – it can take several hours for all the DNS servers to update.

And now, the last step! I was not able to find information about this anywhere, and since this was my first web-site deployment, I did not even know what was wrong.

So, basically, you need to change LocalSettings.php in the following way

OK, now everything is ready – we can type our new address and, yes, we got it!



Analyzing Moscow metro NFC tickets

Some time ago I received a package with an NFC reader inside (please don’t tell me it is already built into most new mobile phones; my dream is to buy the new Nexus 10 tablet from Google with NFC, but in Russia it costs an amazing 600 euros). Still, it is possible to use an ordinary USB reader that looks like this one


Generally, what is NFC? NFC – near field communication – is a modern wireless technology that works over very small distances (several centimeters). Many contactless payment systems (PayPass), transport and metro cards, and identification documents are built using NFC. Even modern European and Russian passports have NFC chips with various data about the passport holder inside.

So, let’s try to read data using NFC. What can we read? OK, we are from Moscow, so the easiest way to start is to read metro tickets. This already works for 1-, 2-, 5-, 10- and 20-ride tickets; I have not tried monthly passes yet. Here is a brief description of Moscow metro tickets.

A Moscow metro ticket is actually a MIFARE Ultralight card. These cards are very cheap, but they can only store 64 bytes and have no cryptographic security. Moreover, the first 4 bytes are used for system information. Let’s dump the card contents and try to analyze them


I used different colors to mark the bits I understand – why they are needed and how the system uses them. The fields are: the card number (printed on the other side of the card), the number of unused rides, the issue date, the expiry date, the ticket type, and even the turnstile number. By the way, this information is not really secret: there are several Android programs that can read metro tickets, but I have not found any that can read the turnstile number (using this number, it is possible to find out the last station where the person entered the underground). I even have a marketing idea – a mobile application called “Check your husband” =) When your husband comes home too late and tells you he had too much work at the office, you can check his metro ticket – maybe the last station is completely different =)

An interesting thing to note: programmers are used to data being aligned at least to byte boundaries, but that is not the case here – the padding is very strange (see the screenshot above).

Here is an example of my program’s output


By the way, the program uses libnfc and is open source; you can download it here. OK, now we are able to read information from the card. But is it possible to write information to the card at all? Please read the next posts.

UEFI – welcome note

Here we will cover what UEFI (formerly EFI) is and how to deal with it: bootability, OS loading, kernel loading, and so on. The question of what EFI actually is will be discussed later, so I am assuming you are already familiar with the basic concepts. So, let’s just start working.

There are three stages of UEFI implementation in motherboards.

Stage 1 – EFI (also known as UEFI version 1) – is implemented on Apple computers. That is how Mac OS loads, and that is one of the reasons why installing Windows on an Apple computer involves installing a special loader, Boot Camp (we are not even talking about the different file systems for Windows and Mac OS, NTFS and HFS). EFI is 32-bit. We will cover EFI later.

Stage 2 is UEFI mixed with BIOS (the old way of loading). 99% of UEFI computers are still backwards compatible with BIOS, for a simple reason: many OSes currently lack good UEFI support. Generally, stage 2 UEFI is 64-bit (although 32-bit UEFI is theoretically possible). On stage 2 computers, you can enable UEFI by ticking a checkbox in the BIOS setup (see the picture). Nowadays almost all 64-bit Intel motherboards, and some others as well, support stage 2.


Stage 3 computers have no backward compatibility with BIOS, so only OSes that support UEFI can be installed on them. Stage 3 computers are quite rare now, and I have not seen one, although Intel claims they exist.

Now we come to the topic “Which OSes currently have UEFI support?”. Windows 7 64-bit and Windows Vista 64-bit have quite good UEFI support, and it really works. As for Linux, Red Hat Linux and Fedora (starting from version 11) claim to support UEFI, but it is not stable yet, mostly because of problems with the UEFI versions of GRUB and LILO (again, we will talk about this later in detail).

Let’s try to install Windows 7 on a UEFI stage 2 computer. Tick the UEFI support checkbox in the BIOS (as shown in the picture above), restart your computer and press F10 when asked. You will see the boot menu, where you need to choose the item mentioning both UEFI and CD/DVD (the name differs between implementations). And here we go – the installation starts. Before Windows loads its own video drivers, the image may have problems, because current UEFI video drivers are quite bad.



One must understand that UEFI boots from GPT disks, not MBR ones (at least the first-stage loader), so the Windows installer will ask you to format your disk as GPT. All other installation steps are the same, and when the computer reboots after installation and loads Windows, you will not notice any difference from the normal way of booting.


Let’s now see what we got “under the hood”. First, open the Disk Manager and notice that the disk is GPT, not MBR, and that we have an additional EFI System Partition on the drive. The EFI System Partition is a special partition where all the EFI first-stage loaders are located. It has a special GUID, and that is how UEFI locates it


Normally, Windows does not mount this partition with a drive letter, because it is internal, but you can force it to be mounted using the mountvol command with a special syntax: mountvol letter: /s. Let’s try calling this command as Administrator

(assuming we don’t have a Z: drive yet) and then browse it using, for example, Total Commander (also with Administrator privileges).

We can now see that this partition contains the EFI version of the Windows Boot Manager. We will talk about it in the next post.


If we browse the GPT disk using diskpart, we will notice that the disk has one more partition that is not shown in Disk Manager. This is the Microsoft Reserved Partition; Microsoft creates it on every GPT drive, even a non-bootable one. Microsoft uses this partition when converting a basic disk to dynamic on GPT, because both GPT and LDM (dynamic disks) store their metadata at the end of the drive (unlike MBR disks, which keep their metadata at the beginning). This partition is not very important for us, because we will not cover dynamic disks on GPT right now.





Linux commands that help me create a correct debugging environment

To begin with, I am a rather inexperienced Linux developer: in all 4 years of commercial development I have used only Windows + Visual Studio, and building software for multiple platforms is not common for me. So I have compiled a list of commands (as well as software) and other hints that help me debug my software on Linux. When I say Linux, 75% of it applies to Mac OS too.

And yes, I am using Ubuntu 8.10.


Ctrl+X, A starts GUI mode. Usually I debug code in NetBeans, but there are situations when I need to debug on a remote computer, and installing NetBeans or even setting up gdbserver would take too much time. So I just press Ctrl+X, A, and gdb switches to its simple text GUI (TUI) mode. Very useful.



This program shows the library dependencies of an executable file. If the file does not run for some reason, the first thing to check is ldd. On Windows, the LIB tool (included with VS) does something similar.


Shows who owns (is using) the file. On Windows I always used the Unlocker application for this (installed separately).


Shows you the kernel boot messages; used together with grep. Example:


which shows you the full path to a specific executable file; whereis does the same and also shows manuals and sources. type shows you the alias for a command.


slocate is a more generalized version of which and whereis. It performs a quick search over a database containing the list of all files; the database is updated daily by cron. Example:

Code review and gained experience

During code reviews I try to write down (to remember) all my defects (bugs in my code). Some of them are trivial (like “function names must be listed in alphabetical order” or “use Pascal casing for member variables”), but some are not, and I want to list them here. Of course, one must understand that during code review it is only possible to find simple defects. If there is an error in the algorithm, it is unlikely to be found during code review.

After three or four bugs found in code review, you will probably gain the experience to spot these defects in your code “on the fly”. The things noted here are well known; if you are an experienced developer, you probably already know about them. In that case, let this be just a note.

Using C++ cast style

Initially, C++ was just a wrapper over C and allowed all C casts (no matter whether they are legal or not). C++ has not banned the old style of cast, but introduced four new casts: const_cast, static_cast, reinterpret_cast and dynamic_cast.

Example (incorrect):

Example (correct):

It is good style never to use old C casts in new code.

Using smart pointers

Creating objects with new/delete is possible but should be avoided; the various smart pointer types should be used wherever possible. In our code we use std::auto_ptr and boost smart pointers. auto_ptr is lightweight, but it cannot be used in STL containers or as a class member because of its ownership-transfer problem.


Use const wherever possible

C++ has the const keyword, although many developers just forget about it. When designing every new function interface, one must always think about const.

Example (incorrect):

Example (correct):

 Invalidate the function return in case of error

This is a rule I disagree with, but it is still our project rule: if a function notifies the upper layer about an error using an error code, it must invalidate its output.

Example (incorrect):

When this function returns false, the parentProcessId output is invalid. This is done for the following reason: if someone who uses this function forgets to check the error code, they will not be able to keep working with parentProcessId (there can be a case when it still holds a valid-looking value from before the GetParentProcessId call).

Example (correct):

 Single return point concept

There is no single rule about the single-return-point concept; different developers have different points of view on it. With a single return point, it is easier to debug code (you can be sure it is the only exit from the function) and easier to free resources on exit. The second point can be addressed with smart pointers (see the rule above), but the first one is still important.

Example (correct):

On the other hand, when there are many validations in the code, code that uses a single return point becomes huge. It is better not to use the concept there.

Example (incorrect):

Example (correct):



C++ and the process of code review

One of the big differences between C++ and, for example, Java or the .NET Framework is the way you write your code. The .NET Framework and Java each have their own code style that is part of the language, and everybody who writes code must obey it. For example, exceptions are a must, and OOP is a must.

C++ is more flexible and allows the programmer to write code any way they want. If it is a low-level API, it can use function return values instead of exceptions; if you prefer functional programming over OOP – that’s your choice. If you want, you can still use macro programming: there is #define.

The downside of this freedom is that each project still has to limit these possibilities itself. For example, in C++ you can report errors using both return codes and exceptions, but in our low-level project we must not use exceptions. So, before committing, the code must be “eyeball” reviewed by another developer. You see, what is prohibited at the language level in Java and the .NET Framework is allowed in C++, and this leads to more reviewing time.

Here is example of C++ code review in our project

This code would not pass code review. The correct code is:

This is just one example of code style in one separate project can differs from code style in another project. So, every developer should spend some time understanding project coding rules. In .Net Framework it is also true, of course, but returning error by return value is prohibited somehow on the level of language, not on the level of project, so every developer (who knows .Net Framework) knows about the rule.
The problem is that the example I showed you is easy and simple. But there are examples of the code where it is not so easy to say if we should allow something or deny it. For example, macroses are generally prohibited in our code (as well as in all c++), but if you are familiar with C++, there are places where it is easier (and more beautiful) to use macros instead of another solution (function, template or whatever).

Pros and cons of low level languages

Some time ago I talked about the DST problem with a Java developer, and the first question he asked me was: “Hey, I guess you don’t even have a single class for a string”. I think the difference between C++ and Java comes down to the topic of “high- vs low-level languages”.

C++ was initially designed this way: the language is just an OO wrapper over plain C, nothing more. The initial design goal was to keep the language as simple as possible and move the implementation of all concepts into libraries. That’s why we don’t have a string data type like in the .NET Framework or Java. That’s why we don’t have an operator to double a value or to take a square root. That’s why C++ without libraries is just nothing.

Of course, we have the STL – the standard template library that is part of C++ – and it has the std::string class (for ANSI strings) and std::wstring (for Unicode strings), and 99.9% of the time you should use these classes to work with strings. But even for something as basic as the concept of time, let’s look at what we have in C++.

name         origin            representation
time_t       ISO C standard    seconds since 1970-Jan-01
FILETIME     Windows           ticks (1 tick = 100 ns) since 1601-Jan-01
SYSTEMTIME   Windows           structure with date, time, year and so on
tm           ISO C standard    structure with date, time, year and so on
UDate        ICU               milliseconds since 1970-Jan-01

Moreover, we have the Mac OS time format, the Java time format and so on… It is common for a third-party library to invent its own standard for the time concept, and if you use that library in your development, you need to write conversion routines. At this point you may want to cry out: “Oh, C++ is horrible, every guy is inventing his own bicycle!”


But this is not always as bad as you might think, and it is one of the reasons why C++ has been so popular for more than 20 years. Let’s imagine a string had been added as a data type to C++ at the beginning of the 80s. That was a time when nobody was thinking about localization and globalization, so I guess it would have been an ANSI string. Then, with the invention of Unicode, this string (a basic data type!) would have become obsolete and would have had to be banned. Most things are subject to change, and their implementation must live in a library, not in the language.

Another reason why we have so many time concepts is the search for a trade-off between memory and usability. Java is a modern language, and a Java developer will not apply the same memory optimizations a C++ developer does, even now. Just note that the Java date is a 64-bit data type, while the C ISO time_t is traditionally a 32-bit one. Sometimes, even if a library’s initial design is perfect, it can become obsolete. Java is a young language and I think (I am not sure, I haven’t actually checked) it does not have many obsolete classes yet, but that is just a matter of time.

Also, note that even a string can’t be a common concept when we are talking about low-level programming. Everybody needs their own trade-offs between speed, memory and ease of use. For example, the Windows core team does not use std::vector, but its own DynArray class.


The con of this approach is, yes, writing too many bicycles. Almost every huge project has its own implementation of even basic concepts. One of the popular mistakes in our company (at least among newbies) is not using the primitives developed by our own developers.

For example, one could write

But this code will not pass review, because we need to use our special Guid class.

The .NET developer, by contrast, will no doubt use the Guid class from the .NET Framework class library.

He knows it because it is a well-documented part of the .NET Framework. And because it is part of the standard library, the time needed to understand code written by another developer decreases.

I should note that C++ has the Boost library, which is not formally part of C++ but parts of which are on their way into the standard (at least this is being discussed). Boost sometimes fixes the problem of inventing bicycles.

Remote debugging for Mac Os X (part 3)

As you probably know, Mac OS X is based on FreeBSD Unix, so the debugging methods are pretty much the same as for Linux: we need to mount the sources and the symbol table and then set the correct paths to them.

However, the mounting process differs slightly from Linux: Mac OS X has a GUI for it. Switch to Finder and press Command+K to open the connection dialog.


Then choose the folder to mount (the folder with the source code).

It is also possible to mount the folder automatically at login. To do this, open System Preferences, click the Accounts icon, choose the Login Items tab and add the folder you have mounted.


Now, about the Mac OS X gdb. It is a patched version of the Unix gdb, and because of the patches it does not understand some gdb commands, including the path command. This means that creating a .gdbinit file will not help. The only workaround is to recreate the folder structure with “/” as the root. To do this, type the following

Now you are ready to do all the debugging stuff using gdb.


Speaking about GUIs, you can use Xcode, which is installed with Mac OS X (I will not go into details, the interface is fairly intuitive), or, alternatively, you can use NetBeans (if you are used to it from Linux).