
C++: Counting Function Calls

How many function calls are involved in executing this piece of C++ (from a QT project):
/* Given a QString, safely escape it properly for sh. For example, given
 * $`"\a\" return \$\`\"\a\\". */
QString ConfIO::writeString(const QString s)
{
    QString ret;

    for (int i = 0; i < s.length(); i++) {
        QChar c = s[i];

        if (c == '$' || c == '`' || c == '"' || c == '\\')
            ret += '\\';
        ret += c;
    }

    return ret;
}
If you don't count any function calls made by .length(), etc., I've counted
21 so far!


Will Moffat said…
Hmm, my C++ is really rusty.
* 2 calls to instantiate ret and c
* 3 operator= or +=
* 4 operator==
* 1 operator[]
So I only get 10, what am I missing?
Doug Napoleone said…
I count even fewer with a proper compiler. Compilers these days are very good at inlining, though Qt is not known for using the best compiler options, nor does it do profile-guided optimization.

In short, don't bother trying to count the function calls you think you see. Count the ones which are actually there with proper profiling systems.

Unfortunately g++ makes this harder than it should, as it inserts the _penter and _pexit calls even for inlined functions. This means you are best off using the Intel profiler tools on Linux. On Windows you can have fun building your own profiler (not an easy task, but you can come up with something very powerful, which is what we did at work). Ours works better than the Intel tools, IMHO.

We have custom string and array classes/templates, but the compiler turns every function into an inlined block for release builds (including the constructor on Windows, something g++ does not seem to do, but icl, Intel's compiler, does). Enabling SSE2 instructions also helps quite a bit (though it means the emitted ASM will not run on chips without SSE2).

As a result, our compiled int8 array (the same as a char array in the end) can do array/vector math performing 16 operations at a time, since that is the SSE2 ASM which is generated and inlined, without our writing any specialized code to do so (which we had at one point). The compiler is now smart enough to do it for us just by looking at code like 'for (int i = 0; i < o.numElem(); i++) ...', and most of the time it does it better. This frees up the developers to work on the really hard problems.

Granted, it took some time to develop our tools and tests so that we could properly determine when and where the compiler was helping or hurting us, and to understand the differences between the compilers we use.
Having a proper test framework with a proper profiling/timing system is crucial to any project, no matter the language.
jjinux said…
Will, I wrote that years ago, and I can't remember now ;)

Doug, great comment! It seems every year I learn even more about how pathetically little I know about C++.
Unknown said…
With no inlining this is what I see:

2x Explicit Constructors for ret and c
1x for length
1x for s[i]
6x Implicit Constructors (s[i], '$', ...)
3x operator=, operator+= (usually QChar c = ... gets turned into a constructor call, saving a call)
4x operator==

It's fun playing similar games with the number of code paths due to short-circuit evaluation and exceptions.
