Bjarne Stroustrup's C++ Style and Technique FAQ

Modified October 4, 2009

Source: http://www2.research.att.com/~bs/bs_faq2.html#void-main

==================================================================================

These are questions about C++ Style and Technique that people ask me often. If you have better questions or comments on the answers, feel free to email me ([email protected]). Please remember that I can't spend all of my time improving my homepages.

For more general questions, see my general FAQ.

For terminology and concepts, see my C++ glossary.

Please note that these are just a collection of questions and answers. They are not a substitute for a carefully selected sequence of examples and explanations as you would find in a good textbook. Nor do they offer detailed and precise specifications as you would find in a reference manual or the standard. See The Design and Evolution of C++ for questions related to the design of C++. See The C++ Programming Language for questions about the use of C++ and its standard library.

Here is a Chinese translation of some of this Q&A with annotations

Link Source: http://www2.research.att.com/~bs/bstechfaq.htm (Encoding to GB2312)


Topics:
Getting started
Classes
Hierarchy
Templates and generic programming
Memory
Exceptions
Other language features
Trivia and style

Getting started:
How do I write this very simple program?
Can you recommend a coding standard?
How do I read a string from input?
How do I convert an integer to a string?
Classes:
How are C++ objects laid out in memory?
Why is "this" not a reference?
Why is the size of an empty class not zero?
How do I define an in-class constant?
Why isn't the destructor called at the end of scope?
Does "friend" violate encapsulation?
Why doesn't my constructor work right?
Class hierarchies:
Why do my compiles take so long?
Why do I have to put the data in my class declarations?
Why are member functions not virtual by default?
Why don't we have virtual constructors?
Why are destructors not virtual by default?
What is a pure virtual function?
Why doesn't C++ have a final keyword?
Can I call a virtual function from a constructor?
Can I stop people deriving from my class?
Why doesn't C++ have a universal class Object?
Do we really need multiple inheritance?
Why doesn't overloading work for derived classes?
Can I use "new" just as in Java?
Templates and generic programming:
Why can't I define constraints for my template parameters?
Why can't I assign a vector<Apple> to a vector<Fruit>?
Is "generics" what templates should have been?
Why use sort() when we have "good old qsort()"?
What is a function object?
What is an auto_ptr and why isn't there an auto_array?
Why doesn't C++ provide heterogeneous containers?
Why are the standard containers so slow?
Memory:
How do I deal with memory leaks?
Why doesn't C++ have an equivalent to realloc()?
What is the difference between new and malloc()?
Can I mix C-style and C++ style allocation and deallocation?
Why must I use a cast to convert from void*?
Is there a "placement delete"?
Why doesn't delete zero out its operand?
What's wrong with arrays?
Exceptions:
Why use exceptions?
How do I use exceptions?
Why can't I resume after catching an exception?
Why doesn't C++ provide a "finally" construct?
Can I throw an exception from a constructor? From a destructor?
What shouldn't I use exceptions for?
Other language features:
Can I write "void main()"?
Why can't I overload dot, ::, sizeof, etc.?
Can I define my own operators?
How do I call a C function from C++?
How do I call a C++ function from C?
Why does C++ have both pointers and references?
Should I use NULL or 0?
What's the value of i++ + i++?
Why are some things left undefined in C++?
What good is static_cast?
So, what's wrong with using macros?
Trivia and style:
How do you pronounce "cout"?
How do you pronounce "char"?
Is "int* p;" right or is "int *p;" right?
Which layout style is best for my code?
How do you name variables? Do you recommend "Hungarian"?
Should I use call-by-value or call-by-reference?
Should I put "const" before or after the type?
How do I write this very simple program?
Often, especially at the start of semesters, I get a lot of questions about how to write very simple programs. Typically, the problem to be solved is to read in a few numbers, do something with them, and write out an answer. Here is a sample program that does that:
#include<iostream>
#include<vector>
#include<algorithm>
using namespace std;

int main()
{
    vector<double> v;

    double d;
    while(cin>>d) v.push_back(d); // read elements
    if (!cin.eof()) { // check if input failed
        cerr << "format error\n";
        return 1; // error return
    }

    cout << "read " << v.size() << " elements\n";

    reverse(v.begin(),v.end());
    cout << "elements in reverse order:\n";
    for (int i = 0; i<v.size(); ++i) cout << v[i] << '\n';

    return 0; // success return
}
Here are a few observations about this program:
This is a Standard ISO C++ program using the standard library. Standard library facilities are declared in namespace std in headers without a .h suffix.
If you want to compile this on a Windows machine, you need to compile it as a "console application". Remember to give your source file the .cpp suffix or the compiler might think that it is C (not C++) source.
Yes, main() returns an int.
Reading into a standard vector guarantees that you don't overflow some arbitrary buffer. Reading into an array without making a "silly error" is beyond the ability of complete novices - by the time you get that right, you are no longer a complete novice. If you doubt this claim, I suggest you read my paper "Learning Standard C++ as a New Language", which you can download from my publications list.
The !cin.eof() is a test of the stream's format. Specifically, it tests whether the loop ended by finding end-of-file (if not, you didn't get input of the expected type/format). For more information, look up "stream state" in your C++ textbook.
A vector knows its size, so I don't have to count elements.
Yes, I know that I could declare i to be a vector<double>::size_type rather than plain int to quiet warnings from some hyper-suspicious compilers, but in this case, I consider that too pedantic and distracting.
This program contains no explicit memory management, and it does not leak memory. A vector keeps track of the memory it uses to store its elements. When a vector needs more memory for elements, it allocates more; when a vector goes out of scope, it frees that memory. Therefore, the user need not be concerned with the allocation and deallocation of memory for vector elements.
For reading in strings, see How do I read a string from input?.
The program ends reading input when it sees "end of file". If you run the program from the keyboard on a Unix machine, "end of file" is Ctrl-D. If you are on a Windows machine that because of a bug doesn't recognize an end-of-file character, you might prefer this slightly more complicated version of the program, which terminates input with the word "end":
#include<iostream>
#include<vector>
#include<algorithm>
#include<string>
using namespace std;

int main()
{
    vector<double> v;

    double d;
    while(cin>>d) v.push_back(d); // read elements
    if (!cin.eof()) { // check if input failed
        cin.clear(); // clear error state
        string s;
        cin >> s; // look for terminator string
        if (s != "end") {
            cerr << "format error\n";
            return 1; // error return
        }
    }

    cout << "read " << v.size() << " elements\n";

    reverse(v.begin(),v.end());
    cout << "elements in reverse order:\n";
    for (int i = 0; i<v.size(); ++i) cout << v[i] << '\n';

    return 0; // success return
}
For more examples of how to use the standard library to do simple things simply, see the "Tour of the Standard Library" Chapter of TC++PL3 (available for download).
Can you recommend a coding standard?
The main point of a C++ coding standard is to provide a set of rules for using C++ for a particular purpose in a particular environment. It follows that there cannot be one coding standard for all uses and all users. For a given application (or company, application area, etc.), a good coding standard is better than no coding standard. On the other hand, I have seen many examples that demonstrate that a bad coding standard is worse than no coding standard.

Please choose your rules with care and with solid knowledge of your application area. Some of the worst coding standards (I won't mention names "to protect the guilty") were written by people without solid knowledge of C++ together with a relative ignorance of the application area (they were "experts" rather than developers) and a misguided conviction that more restrictions are necessarily better than fewer. The counter example to that last misconception is that some features exist to help programmers having to use even worse features. Anyway, please remember that safety, productivity, etc. is the sum of all parts of the design and development process - and not of individual language features, or even of whole languages.

With those caveats, I recommend three things:
Look at Sutter and Alexandrescu: "C++ coding standards". Addison-Wesley, ISBN 0-321-11358-. It has good rules, but look upon it primarily as a set of meta-rules. That is, consider it a guide to what a good, more specific, set of coding rules should look like. If you are writing a coding standard, you ignore this book at your peril.
Look at the JSF air vehicle C++ coding standards. I consider it a pretty good set of rules for safety critical and performance critical code. If you do embedded systems programming, you should consider it. Caveat: I had a hand in the formulation of these rules, so you could consider me biased. On the other hand, please send me constructive comments about it. Such comments might lead to improvements - all good standards are regularly reviewed and updated based on experience and on changes in the work environment. If you don't build hard-real time systems or safety critical systems, you'll find these rules overly restrictive - because then those rules are not for you (at least not all of those rules).
Don't use C coding standards (even if slightly modified for C++) and don't use ten-year-old C++ coding standards (even if good for their time). C++ isn't (just) C and Standard C++ is not (just) pre-standard C++.
Why do my compiles take so long?
You may have a problem with your compiler. It may be old, you may have it installed wrongly, or your computer might be an antique. I can't help you with such problems.

However, it is more likely that the program that you are trying to compile is poorly designed, so that compiling it involves the compiler examining hundreds of header files and tens of thousands of lines of code. In principle, this can be avoided. If this problem is in your library vendor's design, there isn't much you can do (except changing to a better library/vendor), but you can structure your own code to minimize re-compilation after changes. Designs that do that are typically better, more maintainable, designs because they exhibit better separation of concerns.

Consider a classical example of an object-oriented program:
class Shape {
public: // interface to users of Shapes
    virtual void draw() const;
    virtual void rotate(int degrees);
    // ...
protected: // common data (for implementers of Shapes)
    Point center;
    Color col;
    // ...
};

class Circle : public Shape {
public:
    void draw() const;
    void rotate(int) { }
    // ...
protected:
    int radius;
    // ...
};

class Triangle : public Shape {
public:
    void draw() const;
    void rotate(int);
    // ...
protected:
    Point a, b, c;
    // ...
};
The idea is that users manipulate shapes through Shape's public interface, and that implementers of derived classes (such as Circle and Triangle) share aspects of the implementation represented by the protected members.

There are three serious problems with this apparently simple idea:

It is not easy to define shared aspects of the implementation that are helpful to all derived classes. For that reason, the set of protected members is likely to need changes far more often than the public interface. For example, even though "center" is arguably a valid concept for all Shapes, it is a nuisance to have to maintain a point "center" for a Triangle - for triangles, it makes more sense to calculate the center if and only if someone expresses interest in it.
The protected members are likely to depend on "implementation" details that the users of Shapes would rather not have to depend on. For example, many (most?) code using a Shape will be logically independent of the definition of "Color", yet the presence of Color in the definition of Shape will probably require compilation of header files defining the operating system's notion of color.
When something in the protected part changes, users of Shape have to recompile - even though only implementers of derived classes have access to the protected members.

Thus, the presence of "information helpful to implementers" in the base class that also acts as the interface to users is the source of instability in the implementation, spurious recompilation of user code (when implementation information changes), and excess inclusion of header files into user code (because the "information helpful to implementers" needs those headers). This is sometimes known as the "brittle base class problem."

The obvious solution is to omit the "information helpful to implementers" from classes that are used as interfaces to users. That is, make interfaces pure interfaces; that is, represent interfaces as abstract classes:
class Shape {
public: // interface to users of Shapes
    virtual void draw() const = 0;
    virtual void rotate(int degrees) = 0;
    virtual Point center() const = 0;
    // ...

    // no data
};

class Circle : public Shape {
public:
    void draw() const;
    void rotate(int) { }
    Point center() const { return cent; }
    // ...
protected:
    Point cent;
    Color col;
    int radius;
    // ...
};

class Triangle : public Shape {
public:
    void draw() const;
    void rotate(int);
    Point center() const;
    // ...
protected:
    Color col;
    Point a, b, c;
    // ...
};
The users are now insulated from changes to implementations of derived classes. I have seen this technique decrease build times by orders of magnitude.

But what if there really is some information that is common to all derived classes (or simply to several derived classes)? Simply make that information a class and derive the implementation classes from that also:
class Shape {
public: // interface to users of Shapes
    virtual void draw() const = 0;
    virtual void rotate(int degrees) = 0;
    virtual Point center() const = 0;
    // ...

    // no data
};

struct Common {
    Color col;
    // ...
};

class Circle : public Shape, protected Common {
public:
    void draw() const;
    void rotate(int) { }
    Point center() const { return cent; }
    // ...
protected:
    Point cent;
    int radius;
};

class Triangle : public Shape, protected Common {
public:
    void draw() const;
    void rotate(int);
    Point center() const;
    // ...
protected:
    Point a, b, c;
};

Why is the size of an empty class not zero?
To ensure that the addresses of two different objects will be different. For the same reason, "new" always returns pointers to distinct objects. Consider:
class Empty { };

void f()
{
    Empty a, b;
    if (&a == &b) cout << "impossible: report error to compiler supplier";

    Empty* p1 = new Empty;
    Empty* p2 = new Empty;
    if (p1 == p2) cout << "impossible: report error to compiler supplier";
}
There is an interesting rule that says that an empty base class need not be represented by a separate byte:
struct X : Empty {
    int a;
    // ...
};

void f(X* p)
{
    void* p1 = p;
    void* p2 = &p->a;
    if (p1 == p2) cout << "nice: good optimizer";
}
This optimization is safe and can be most useful. It allows a programmer to use empty classes to represent very simple concepts without overhead. Some current compilers provide this "empty base class optimization".
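Both points can be checked with sizeof. Below is a minimal sketch (using Empty and X as above): the nonzero size of Empty and the distinct addresses are guaranteed by the language; whether sizeof(X) equals sizeof(int) depends on the compiler actually performing the empty base class optimization, so that is only printed, not assumed:

```cpp
#include <iostream>

class Empty { };

struct X : Empty {
    int a;
};

// Guaranteed: two distinct objects have distinct addresses.
bool distinct_addresses()
{
    Empty a, b;
    return &a != &b;
}

// Compiler-dependent: with the empty base class optimization,
// the Empty base contributes no storage to X.
void report()
{
    std::cout << "sizeof(Empty) == " << sizeof(Empty) << '\n';
    std::cout << "sizeof(X) == " << sizeof(X)
              << (sizeof(X) == sizeof(int) ? " (empty base optimized away)\n" : "\n");
}
```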
Why do I have to put the data in my class declarations?
You don't. If you don't want data in an interface, don't put it in the class that defines the interface. Put it in derived classes instead. See Why do my compiles take so long?.

Sometimes, you do want to have representation data in a class. Consider class complex:
template<class Scalar> class complex {
public:
    complex() : re(0), im(0) { }
    complex(Scalar r) : re(r), im(0) { }
    complex(Scalar r, Scalar i) : re(r), im(i) { }
    // ...

    complex& operator+=(const complex& a)
        { re+=a.re; im+=a.im; return *this; }
    // ...
private:
    Scalar re, im;
};
This type is designed to be used much as a built-in type and the representation is needed in the declaration to make it possible to create genuinely local objects (i.e. objects that are allocated on the stack and not on a heap) and to ensure proper inlining of simple operations. Genuinely local objects and inlining are necessary to get the performance of complex close to what is provided in languages with a built-in complex type.
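As a sketch of why the visible representation matters: objects of complex can live on the stack and operator+= is a trivially inlinable member call. (The real() and imag() accessors below are added for this illustration only; they are not part of the declaration above.)

```cpp
template<class Scalar> class complex {
public:
    complex() : re(0), im(0) { }
    complex(Scalar r) : re(r), im(0) { }
    complex(Scalar r, Scalar i) : re(r), im(i) { }

    complex& operator+=(const complex& a)
        { re+=a.re; im+=a.im; return *this; }

    Scalar real() const { return re; }  // accessors added for this sketch
    Scalar imag() const { return im; }
private:
    Scalar re, im;
};

double sum_of_reals()
{
    complex<double> z(1,2);  // genuinely local: no heap allocation
    complex<double> w(3,4);
    z += w;                  // simple operation, easily inlined
    return z.real();
}
```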

Why are member functions not virtual by default?
Because many classes are not designed to be used as base classes. For example, see class complex.

Also, objects of a class with a virtual function require space needed by the virtual function call mechanism - typically one word per object. This overhead can be significant, and can get in the way of layout compatibility with data from other languages (e.g. C and Fortran).

See The Design and Evolution of C++ for more design rationale.

Why are destructors not virtual by default?
Because many classes are not designed to be used as base classes. Virtual functions make sense only in classes meant to act as interfaces to objects of derived classes (typically allocated on a heap and accessed through pointers or references).

So when should I declare a destructor virtual? Whenever the class has at least one virtual function. Having virtual functions indicates that a class is meant to act as an interface to derived classes, and when it is, an object of a derived class may be destroyed through a pointer to the base. For example:
class Base {
    // ...
    virtual ~Base();
};

class Derived : public Base {
    // ...
    ~Derived();
};

void f()
{
    Base* p = new Derived;
    delete p; // virtual destructor used to ensure that ~Derived is called
}
Had Base's destructor not been virtual, Derived's destructor would not have been called - with likely bad effects, such as resources owned by Derived not being freed.

Why don't we have virtual constructors?
A virtual call is a mechanism to get work done given partial information. In particular, "virtual" allows us to call a function knowing only an interface and not the exact type of the object. To create an object you need complete information. In particular, you need to know the exact type of what you want to create. Consequently, a "call to a constructor" cannot be virtual.

Techniques for using an indirection when you ask to create an object are often referred to as "Virtual constructors". For example, see TC++PL3 15.6.2.

For example, here is a technique for generating an object of an appropriate type using an abstract class:
struct F { // interface to object creation functions
    virtual A* make_an_A() const = 0;
    virtual B* make_a_B() const = 0;
};

void user(const F& fac)
{
    A* p = fac.make_an_A(); // make an A of the appropriate type
    B* q = fac.make_a_B(); // make a B of the appropriate type
    // ...
}

struct FX : F {
    A* make_an_A() const { return new AX(); } // AX is derived from A
    B* make_a_B() const { return new BX(); } // BX is derived from B
};

struct FY : F {
    A* make_an_A() const { return new AY(); } // AY is derived from A
    B* make_a_B() const { return new BY(); } // BY is derived from B
};

int main()
{
    FX x;
    FY y;
    user(x); // this user makes AXs and BXs
    user(y); // this user makes AYs and BYs

    user(FX()); // this user makes AXs and BXs
    user(FY()); // this user makes AYs and BYs
    // ...
}
This is a variant of what is often called "the factory pattern". The point is that user() is completely isolated from knowledge of classes such as AX and AY.
What is a pure virtual function?
A pure virtual function is a function that must be overridden in a derived class and need not be defined. A virtual function is declared to be "pure" using the curious "=0" syntax. For example:
class Base {
public:
    void f1(); // not virtual
    virtual void f2(); // virtual, not pure
    virtual void f3() = 0; // pure virtual
};

Base b; // error: pure virtual f3 not overridden
Here, Base is an abstract class (because it has a pure virtual function), so no objects of class Base can be directly created: Base is (explicitly) meant to be a base class. For example:
class Derived : public Base {
    // no f1: fine
    // no f2: fine, we inherit Base::f2
    void f3();
};

Derived d; // ok: Derived::f3 overrides Base::f3
Abstract classes are immensely useful for defining interfaces. In fact, a class with only pure virtual functions is often called an interface.

You can define a pure virtual function:
void Base::f3() { /* ... */ }
This is very occasionally useful (to provide some simple common implementation detail for derived classes), but Base::f3() must still be overridden in some derived class.
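For example (a minimal sketch; the body of Base::f3() here just sets a flag so the effect is observable - a real class would put some common implementation detail there):

```cpp
bool base_part_ran = false;

class Base {
public:
    virtual void f3() = 0; // pure virtual, yet defined below
    virtual ~Base() { }
};

void Base::f3() { base_part_ran = true; } // common implementation detail

class Derived : public Base {
public:
    void f3() // still required: Derived must override the pure virtual
    {
        Base::f3(); // explicitly reuse the common part
        // ... Derived-specific work ...
    }
};
```

Note that the defined Base::f3() can be reached only through explicit qualification; the override in Derived is still mandatory.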

If you don't override a pure virtual function in a derived class, that derived class becomes abstract:
class D2 : public Base {
    // no f1: fine
    // no f2: fine, we inherit Base::f2
    // no f3: fine, but D2 is therefore still abstract
};

D2 d; // error: pure virtual Base::f3 not overridden
Why doesn't overloading work for derived classes?
That question (in many variations) is usually prompted by an example like this:
#include<iostream>
using namespace std;

class B {
public:
    int f(int i) { cout << "f(int): "; return i+1; }
    // ...
};

class D : public B {
public:
    double f(double d) { cout << "f(double): "; return d+1.3; }
    // ...
};

int main()
{
    D* pd = new D;

    cout << pd->f(2) << '\n';
    cout << pd->f(2.3) << '\n';
}
which will produce:
f(double): 3.3
f(double): 3.6
rather than the
f(int): 3
f(double): 3.6
that some people (wrongly) guessed.

In other words, there is no overload resolution between D and B. The compiler looks into the scope of D, finds the single function "double f(double)" and calls it. It never bothers with the (enclosing) scope of B. In C++, there is no overloading across scopes - derived class scopes are not an exception to this general rule. (See D&E or TC++PL3 for details).

But what if I want to create an overload set of all my f() functions from my base and derived class? That's easily done using a using-declaration:
class D : public B {
public:
    using B::f; // make every f from B available
    double f(double d) { cout << "f(double): "; return d+1.3; }
    // ...
};
Given that modification, the output will be
f(int): 3
f(double): 3.6
That is, overload resolution was applied to B's f() and D's f() to select the most appropriate f() to call.

Can I use "new" just as in Java?

Sort of, but don't do it blindly and there are often superior alternatives. Consider:
void compute(cmplx z, double d)
{
    cmplx z2 = z+d; // C++ style
    z2 = f(z2); // use z2

    cmplx& z3 = *new cmplx(z+d); // Java style (assuming Java could overload +)
    z3 = f(z3);
    delete &z3;
}
The clumsy use of "new" for z3 is unnecessary and slow compared with the idiomatic use of a local variable (z2). You don't need to use "new" to create an object if you also "delete" that object in the same scope; such an object should be a local variable.
Can I call a virtual function from a constructor?
Yes, but be careful. It may not do what you expect. In a constructor, the virtual call mechanism is disabled because overriding from derived classes hasn't yet happened. Objects are constructed from the base up, "base before derived".

Consider:
#include<string>
#include<iostream>
using namespace std;

class B {
public:
    B(const string& ss) { cout << "B constructor\n"; f(ss); }
    virtual void f(const string&) { cout << "B::f\n"; }
};

class D : public B {
public:
    D(const string& ss) :B(ss) { cout << "D constructor\n"; }
    void f(const string& ss) { cout << "D::f\n"; s = ss; }
private:
    string s;
};

int main()
{
    D d("Hello");
}
The program compiles and produces
B constructor
B::f
D constructor
Note that it is not D::f that is called. Consider what would happen if the rule were different so that D::f() were called from B::B(): because the constructor D::D() hadn't yet been run, D::f() would try to assign its argument to an uninitialized string s. The result would most likely be an immediate crash.

Destruction is done "derived class before base class", so virtual functions behave as in constructors: Only the local definitions are used - and no calls are made to overriding functions to avoid touching the (now destroyed) derived class part of the object.

For more details see D&E 13.2.4.2 or TC++PL3 15.4.3.

It has been suggested that this rule is an implementation artifact. It is not so. In fact, it would be noticeably easier to implement the unsafe rule of calling virtual functions from constructors exactly as from other functions. However, that would imply that no virtual function could be written to rely on invariants established by base classes. That would be a terrible mess.

Is there a "placement delete"?
No, but if you need one you can write your own.

Consider placement new used to place objects in a set of arenas
class Arena {
public:
    void* allocate(size_t);
    void deallocate(void*);
    // ...
};

void* operator new(size_t sz, Arena& a)
{
    return a.allocate(sz);
}

Arena a1(some arguments);
Arena a2(some arguments);
Given that, we can write
X* p1 = new(a1) X;
Y* p2 = new(a1) Y;
Z* p3 = new(a2) Z;
// ...
But how can we later delete those objects correctly? The reason that there is no built-in "placement delete" to match placement new is that there is no general way of assuring that it would be used correctly. Nothing in the C++ type system allows us to deduce that p1 points to an object allocated in Arena a1. A pointer to any X allocated anywhere can be assigned to p1.

However, sometimes the programmer does know, and there is a way:
template<class T> void destroy(T* p, Arena& a)
{
    if (p) {
        p->~T(); // explicit destructor call
        a.deallocate(p);
    }
}
Now, we can write:
destroy(p1,a1);
destroy(p2,a1);
destroy(p3,a2);
If an Arena keeps track of what objects it holds, you can even write destroy() to defend itself against mistakes.

It is also possible to define matching operator new() and operator delete() pairs for a class hierarchy; see TC++PL(SE) 15.6. See also D&E 10.4 and TC++PL(SE) 19.4.5.
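The destroy() technique above can be sketched end to end. The Arena below is a toy that just counts allocations and forwards to malloc/free (a real arena would manage a memory pool), so correct pairing of construction and destruction can be observed:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

class Arena {
public:
    Arena() : live(0) { }
    void* allocate(std::size_t n) { ++live; return std::malloc(n); }
    void deallocate(void* p) { --live; std::free(p); }
    int live; // allocations not yet deallocated
};

// placement new: construct an object in storage from a given Arena
void* operator new(std::size_t sz, Arena& a)
{
    return a.allocate(sz);
}

// the programmer-supplied "placement delete":
// destroy the object, then return its storage to the right Arena
template<class T> void destroy(T* p, Arena& a)
{
    if (p) {
        p->~T(); // explicit destructor call
        a.deallocate(p);
    }
}

struct X { int v; X() : v(7) { } };
```

Note that nothing stops destroy(p, a2) being called for an object allocated in a1 - which is exactly why the language cannot supply this pairing automatically.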

Can I stop people deriving from my class?

Yes, but why do you want to? There are two common answers:
for efficiency: to avoid my function calls being virtual
for safety: to ensure that my class is not used as a base class (for example, to be sure that I can copy objects without fear of slicing)
In my experience, the efficiency reason is usually misplaced fear. In C++, virtual function calls are so fast that their real-world use for a class designed with virtual functions does not produce measurable run-time overheads compared to alternative solutions using ordinary function calls. Note that the virtual function call mechanism is typically used only when calling through a pointer or a reference. When calling a function directly for a named object, the virtual function call overhead is easily optimized away.

If there is a genuine need for "capping" a class hierarchy to avoid virtual function calls, one might ask why those functions are virtual in the first place. I have seen examples where performance-critical functions had been made virtual for no good reason, just because "that's the way we usually do it".

The other variant of this problem, how to prevent derivation for logical reasons, has a solution. Unfortunately, that solution is not pretty. It relies on the fact that the most derived class in a hierarchy must construct a virtual base. For example:
class Usable;

class Usable_lock {
    friend class Usable;
private:
    Usable_lock() {}
    Usable_lock(const Usable_lock&) {}
};

class Usable : public virtual Usable_lock {
    // ...
public:
    Usable();
    Usable(char*);
    // ...
};

Usable a;

class DD : public Usable { };

DD dd; // error: DD::DD() cannot access
       // Usable_lock::Usable_lock(): private member
(from D&E sec 11.4.3).

Why doesn't C++ provide heterogeneous containers?
The C++ standard library provides a set of useful, statically type-safe, and efficient containers. Examples are vector, list, and map:
vector<int> vi(10);
vector<Shape*> vs;
list<string> lst;
list<double> l2;
map<string,Record*> tbl;
map< Key,vector<Record*> > t2;
These containers are described in all good C++ textbooks, and should be preferred over arrays and "home cooked" containers unless there is a good reason not to.

These containers are homogeneous; that is, they hold elements of the same type. If you want a container to hold elements of several different types, you must express that either as a union or (usually much better) as a container of pointers to a polymorphic type. The classical example is:
vector<Shape*> vi; // vector of pointers to Shapes
Here, vi can hold elements of any type derived from Shape. That is, vi is homogeneous in that all its elements are Shapes (to be precise, pointers to Shapes) and heterogeneous in the sense that vi can hold elements of a wide variety of Shapes, such as Circles, Triangles, etc.

So, in a sense all containers (in every language) are homogeneous, because to use them there must be a common interface to all elements for users to rely on. Languages that provide containers deemed heterogeneous simply provide containers of elements that all provide a standard interface. For example, Java collections provide containers of (references to) Objects and you use the (common) Object interface to discover the real type of an element.

The C++ standard library provides homogeneous containers because those are the easiest to use in the vast majority of cases, give the best compile-time error messages, and impose no unnecessary run-time overheads.

If you need a heterogeneous container in C++, define a common interface for all the elements and make a container of those. For example:
class Io_obj { /* ... */ }; // the interface needed to take part in object I/O

vector<Io_obj*> vio; // if you want to manage the pointers directly
vector< Handle<Io_obj> > v2; // if you want a "smart pointer" to handle the objects
Don't drop to the lowest level of implementation detail unless you have to:
vector<void*> memory; // rarely needed
A good indication that you have "gone too low level" is that your code gets littered with casts.

Using an Any class, such as Boost::Any, can be an alternative in some programs:
vector<Any> v;

Why are the standard containers so slow?
They are not. Probably "compared to what?" is a more useful answer. When people complain about standard-library container performance, I usually find one of three genuine problems (or one of the many myths and red herrings):
I suffer copy overhead
I suffer slow speed for lookup tables
My hand-coded (intrusive) lists are much faster than std::list
Before trying to optimize, consider if you have a genuine performance problem. In most cases sent to me, the performance problem is theoretical or imaginary: first measure, then optimize only if needed.

Let's look at those problems in turn. Often, a vector<X> is slower than somebody's specialized My_container<X> because My_container<X> is implemented as a container of pointers to X. The standard containers hold copies of values, and copy a value when you put it into the container. This is essentially unbeatable for small values, but can be quite unsuitable for huge objects:
vector<int> vi;
vector<Image> vim;
// ...
int i = 7;
Image im("portrait.jpg"); // initialize image from file
// ...
vi.push_back(i); // put (a copy of) i into vi
vim.push_back(im); // put (a copy of) im into vim
Now, if portrait.jpg is a couple of megabytes and Image has value semantics (i.e., copy assignment and copy construction make copies), then vim.push_back(im) will indeed be expensive. But - as the saying goes - if it hurts so much, just don't do it. Instead, either use a container of handles or a container of pointers. For example, if Image had reference semantics, the code above would incur only the cost of a copy constructor call, which would be trivial compared to most image manipulation operations. If some class, say Image again, does have copy semantics for good reasons, a container of pointers is often a reasonable solution:
vector<int> vi;
vector<Image*> vim;
// ...
Image im("portrait.jpg"); // initialize image from file
// ...
vi.push_back(7); // put (a copy of) 7 into vi
vim.push_back(&im); // put (a copy of) &im into vim
Naturally, if you use pointers, you have to think about resource management, but containers of pointers can themselves be effective and cheap resource handles (often, you need a container with a destructor for deleting the "owned" objects).

The second frequently occurring genuine performance problem is the use of a map<string,X> for a large number of (string,X) pairs. Maps are fine for relatively small containers (say a few hundred or few thousand elements -- access to an element of a map of 10000 elements costs about 9 comparisons), where less-than is cheap, and where no good hash-function can be constructed. If you have lots of strings and a good hash function, use a hash table. The unordered_map from the standard committee's Technical Report is now widely available and is far better than most people's homebrew.

Sometimes, you can speed up things by using (const char*,X) pairs rather than (string,X) pairs, but remember that < doesn't do lexicographical comparison for C-style strings. Also, if X is large, you may have the copy problem also (solve it in one of the usual ways).

Intrusive lists can be really fast. However, consider whether you need a list at all: a vector is more compact and is therefore smaller and faster in many cases - even when you do inserts and erases. For example, if you logically have a list of a few integer elements, a vector is significantly faster than a list (any list). Also, intrusive lists cannot hold built-in types directly (an int does not have a link member). So, assume that you really need a list and that you can supply a link field for every element type. The standard-library list by default performs an allocation followed by a copy for each operation inserting an element (and a deallocation for each operation removing an element). For std::list with the default allocator, this can be significant. For small elements where the copy overhead is not significant, consider using an optimized allocator. Use a hand-crafted intrusive list only where a list is needed and the last ounce of performance matters.

People sometimes worry about the cost of std::vector growing incrementally. I used to worry about that and used reserve() to optimize the growth. After measuring my code and repeatedly having trouble finding the performance benefits of reserve() in real programs, I stopped using it except where it is needed to avoid iterator invalidation (a rare case in my code). Again: measure before you optimize.

Does "friend" violate encapsulation?
No. It does not. "Friend" is an explicit mechanism for granting access, just like membership. You cannot (in a standard conforming program) grant yourself access to a class without modifying its source. For example:
class X {
int i;
public:
void m(); // grant X::m() access
friend void f(X&); // grant f(X&) access
// ...
};

void X::m() { i++; /* X::m() can access X::i */ }

void f(X& x) { x.i++; /* f(X&) can access X::i */ }
For a description of the C++ protection model, see D&E sec 2.10 and TC++PL sec 11.5, 15.3, and C.11.

Why doesn't my constructor work right?
This is a question that comes in many forms. Such as:
Why does the compiler copy my objects when I don't want it to?
How do I turn off copying?
How do I stop implicit conversions?
How did my int turn into a complex number?
By default a class is given a copy constructor and a copy assignment that copy all elements. For example:
struct Point {
int x,y;
Point(int xx = 0, int yy = 0) :x(xx), y(yy) { }
};

Point p1(1,2);
Point p2 = p1;
Here we get p2.x==p1.x and p2.y==p1.y. That's often exactly what you want (and essential for C compatibility), but consider:
class Handle {
private:
string name;
X* p;
public:
Handle(string n)
:name(n), p(0) { /* acquire X called "name" and let p point to it */ }
~Handle() { delete p; /* release X called "name" */ }
// ...
};

void f(const string& hh)
{
Handle h1(hh);
Handle h2 = h1; // leads to disaster!
// ...
}
Here, the default copy gives us h2.name==h1.name and h2.p==h1.p. This leads to disaster: when we exit f() the destructors for h1 and h2 are invoked and the object pointed to by h1.p and h2.p is deleted twice.

How do we avoid this? The simplest solution is to prevent copying by making the operations that copy private:
class Handle {
private:
string name;
X* p;

Handle(const Handle&); // prevent copying
Handle& operator=(const Handle&);
public:
Handle(string n)
:name(n), p(0) { /* acquire the X called "name" and let p point to it */ }
~Handle() { delete p; /* release X called "name" */ }
// ...
};

void f(const string& hh)
{
Handle h1(hh);
Handle h2 = h1; // error (reported by compiler)
// ...
}
If we need to copy, we can of course define the copy initializer and the copy assignment to provide the desired semantics.

Now return to Point. For Point the default copy semantics is fine, the problem is the constructor:
struct Point {
int x,y;
Point(int xx = 0, int yy = 0) :x(xx), y(yy) { }
};

void f(Point);

void g()
{
Point orig; // create orig with the default value (0,0)
Point p1(2); // create p1 with the default y-coordinate 0
f(2); // calls Point(2,0);
}
People provide default arguments to get the convenience used for orig and p1. Then, some are surprised by the conversion of 2 to Point(2,0) in the call of f(). A constructor taking a single argument defines a conversion. By default that's an implicit conversion. To require such a conversion to be explicit, declare the constructor explicit:
struct Point {
int x,y;
explicit Point(int xx = 0, int yy = 0) :x(xx), y(yy) { }
};

void f(Point);

void g()
{
Point orig; // create orig with the default value (0,0)
Point p1(2); // create p1 with the default y-coordinate 0
// that's an explicit call of the constructor
f(2); // error (attempted implicit conversion)
Point p2 = 2; // error (attempted implicit conversion)
Point p3 = Point(2); // ok (explicit conversion)
}

Why does C++ have both pointers and references?
C++ inherited pointers from C, so I couldn't remove them without causing serious compatibility problems. References are useful for several things, but the direct reason I introduced them in C++ was to support operator overloading. For example:
void f1(const complex* x, const complex* y) // without references
{
complex z = *x+*y; // ugly
// ...
}

void f2(const complex& x, const complex& y) // with references
{
complex z = x+y; // better
// ...
}
More generally, if you want to have both the functionality of pointers and the functionality of references, you need either two different types (as in C++) or two different sets of operations on a single type. For example, with a single type you need both an operation to assign to the object referred to and an operation to assign to the reference/pointer. This can be done using separate operators (as in Simula). For example:
Ref<My_type> r :- new My_type;
r := 7; // assign to object
r :- new My_type; // assign to reference
Alternatively, you could rely on type checking (overloading). For example:
Ref<My_type> r = new My_type;
r = 7; // assign to object
r = new My_type; // assign to reference

Should I use call-by-value or call-by-reference?
That depends on what you are trying to achieve:
If you want to change the object passed, call by reference or use a pointer; e.g. void f(X&); or void f(X*);
If you don't want to change the object passed and it is big, call by const reference; e.g. void f(const X&);
Otherwise, call by value; e.g. void f(X);
What do I mean by "big"? Anything larger than a couple of words.

Why would I want to change an argument? Well, often we have to, but often we have an alternative: produce a new value. Consider:
void incr1(int& x); // increment
int incr2(int x); // increment

int v = 2;
incr1(v); // v becomes 3
v = incr2(v); // v becomes 4
I think that for a reader, incr2() is easier to understand. That is, incr1() is more likely to lead to mistakes and errors. So, I'd prefer the style that returns a new value over the one that modifies a value as long as the creation and copy of a new value isn't expensive.

I do want to change the argument, should I use a pointer or should I use a reference? I don't know a strong logical reason. If passing ``not an object'' (e.g. a null pointer) is acceptable, using a pointer makes sense. My personal style is to use a pointer when I want to modify an object because in some contexts that makes it easier to spot that a modification is possible.

Note also that a call of a member function is essentially a call-by-reference on the object, so we often use member functions when we want to modify the value/state of an object.

Why is "this" not a reference?
Because "this" was introduced into C++ (really into C with Classes) before references were added. Also, I chose "this" to follow Simula usage, rather than the (later) Smalltalk use of "self".

What's wrong with arrays?

In terms of time and space, an array is just about the optimal construct for accessing a sequence of objects in memory. It is, however, also a very low level data structure with a vast potential for misuse and errors and in essentially all cases there are better alternatives. By "better" I mean easier to write, easier to read, less error prone, and as fast.

The two fundamental problems with arrays are that
an array doesn't know its own size
the name of an array converts to a pointer to its first element at the slightest provocation
Consider some examples:
void f(int a[], int s)
{
// do something with a; the size of a is s
for (int i = 0; i<s; ++i) a[i] = i;
}

int arr1[20];
int arr2[10];

void g()
{
f(arr1,20);
f(arr2,20);
}
The second call will scribble all over memory that doesn't belong to arr2. Naturally, a programmer usually gets the size right, but it's extra work, and every so often someone makes the mistake. I prefer the simpler and cleaner version using the standard library vector:
void f(vector<int>& v)
{
// do something with v
for (int i = 0; i<v.size(); ++i) v[i] = i;
}

vector<int> v1(20);
vector<int> v2(10);

void g()
{
f(v1);
f(v2);
}

Since an array doesn't know its size, there can be no array assignment:
void f(int a[], int b[], int size)
{
a = b; // not array assignment
memcpy(a,b,size); // a = b
// ...
}
Again, I prefer vector:
void g(vector<int>& a, vector<int>& b, int size)
{
a = b;
// ...
}
Another advantage of vector here is that memcpy() is not going to do the right thing for elements with copy constructors, such as strings.
void f(string a[], string b[], int size)
{
a = b; // not array assignment
memcpy(a,b,size); // disaster
// ...
}

void g(vector<string>& a, vector<string>& b, int size)
{
a = b;
// ...
}

An array is of a fixed size determined at compile time:
const int S = 10;

void f(int s)
{
int a1[s]; // error
int a2[S]; // ok

// if I want to extend a2, I'll have to change to an array
// allocated on free store using malloc() and use realloc()
// ...
}
To contrast:
const int S = 10;

void g(int s)
{
vector<int> v1(s); // ok
vector<int> v2(S); // ok
v2.resize(v2.size()*2);
// ...
}
C99 allows variable array bounds for local arrays, but those VLAs have their own problems.

The way that array names "decay" into pointers is fundamental to their use in C and C++. However, array decay interacts very badly with inheritance. Consider:
class Base { public: void fct(); /* ... */ };
class Derived : public Base { /* ... */ };

void f(Base* p, int sz)
{
for (int i=0; i<sz; ++i) p[i].fct();
}

Base ab[20];
Derived ad[20];

void g()
{
f(ab,20);
f(ad,20); // disaster!
}
In the last call, the Derived[] is treated as a Base[] and the subscripting no longer works correctly when sizeof(Derived)!=sizeof(Base) -- as will be the case in most cases of interest. If we used vectors instead, the error would be caught at compile time:
void f(vector<Base>& v)
{
for (int i=0; i<v.size(); ++i) v[i].fct();
}

vector<Base> ab(20);
vector<Derived> ad(20);

void g()
{
f(ab);
f(ad); // error: cannot convert a vector<Derived> to a vector<Base>
}
I find that an astonishing number of novice programming errors in C and C++ relate to (mis)uses of arrays.

Why doesn't C++ have a final keyword?
There didn't (and doesn't) seem to be a sufficient need.

Should I use NULL or 0?
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.

If you have to name the null pointer, call it nullptr; that's what it's going to be called in C++0x. Then, "nullptr" will be a keyword.

How are C++ objects laid out in memory?
Like C, C++ doesn't define layouts, just semantic constraints that must be met. Therefore different implementations do things differently. Unfortunately, the best explanation I know of is in a book that is otherwise outdated and doesn't describe any current C++ implementation: The Annotated C++ Reference Manual (usually called the ARM). It has diagrams of key layout examples. There is a very brief explanation in Chapter 2 of TC++PL.

Basically, C++ constructs objects simply by concatenating sub objects. Thus
struct A { int a,b; };
is represented by two ints next to each other, and
struct B : A { int c; };
is represented by an A followed by an int; that is, by three ints next to each other.

Virtual functions are typically implemented by adding a pointer (the vptr) to each object of a class with virtual functions. This pointer points to the appropriate table of functions (the vtbl). Each class has its own vtbl shared by all objects of that class.

What's the value of i++ + i++?
It's undefined. Basically, in C and C++, if you read a variable twice in an expression where you also write it, the result is undefined. Don't do that. Another example is:
v[i] = i++;
Related example:
f(v[i],i++);
Here, the result is undefined because the order of evaluation of function arguments is undefined.

Having the order of evaluation undefined is claimed to yield better performing code. Compilers could warn about such examples, which are typically subtle bugs (or potential subtle bugs). I'm disappointed that after decades, most compilers still don't warn, leaving that job to specialized, separate, and underused tools.

Why are some things left undefined in C++?

Because machines differ and because C left many things undefined. For details, including definitions of the terms "undefined", "unspecified", "implementation defined", and "well-formed", see the ISO C++ standard. Note that the meaning of those terms differs from their definitions in the ISO C standard and from some common usage. You can get wonderfully confused discussions when people don't realize that not everybody shares definitions.

This is a correct, if unsatisfactory, answer. Like C, C++ is meant to exploit hardware directly and efficiently. This implies that C++ must deal with hardware entities such as bits, bytes, words, addresses, integer computations, and floating-point computations the way they are on a given machine, rather than how we might like them to be. Note that many "things" that people refer to as "undefined" are in fact "implementation defined", so that we can write perfectly specified code as long as we know which machine we are running on. Sizes of integers and the rounding behaviour of floating-point computations fall into that category.

Consider what is probably the best known and most infamous example of undefined behavior:
int a[10];
a[100] = 0; // range error
int* p = a;
// ...
p[100] = 0; // range error (unless we gave p a better value before that assignment)
The C++ (and C) notions of array and pointer are direct representations of a machine's notion of memory and addresses, provided with no overhead. The primitive operations on pointers map directly onto machine instructions. In particular, no range checking is done. Doing range checking would impose a cost in terms of run time and code size. C was designed to outcompete assembly code for operating systems tasks, so that was a necessary decision. Also, C -- unlike C++ -- has no reasonable way of reporting a violation had a compiler decided to generate code to detect it: there are no exceptions in C. C++ followed C for reasons of compatibility and because C++ also competes directly with assembler (in OS, embedded systems, and some numeric computation areas). If you want range checking, use a suitable checked class (vector, smart pointer, string, etc.). A good compiler could catch the range error for a[100] at compile time; catching the one for p[100] is far more difficult, and in general it is impossible to catch every range error at compile time.

Other examples of undefined behavior stem from the compilation model. A compiler cannot detect an inconsistent definition of an object or a function in separately-compiled translation units. For example:
// file1.c:
struct S { int x,y; };
int f(struct S* p) { return p->x; }

// file2.c:
struct S { int y,x; };
int main()
{
struct S s;
s.x = 1;
int x = f(&s); // x != s.x !!
return 2;
}
Compiling file1.c and file2.c and linking the results into the same program is illegal in both C and C++. A linker could catch the inconsistent definition of S, but is not obliged to do so (and most don't). In many cases, it can be quite difficult to catch inconsistencies between separately compiled translation units. Consistent use of header files helps minimize such problems and there are some signs that linkers are improving. Note that C++ linkers do catch almost all errors related to inconsistently declared functions.

Finally, we have the apparently unnecessary and rather annoying undefined behavior of individual expressions. For example:
void out1() { cout << 1; }
void out2() { cout << 2; }

int main()
{
int i = 10;
int j = ++i + i++; // value of j unspecified
f(out1(),out2()); // prints 12 or 21
}
The value of j is unspecified to allow compilers to produce optimal code. It is claimed that the difference between what can be produced giving the compiler this freedom and requiring "ordinary left-to-right evaluation" can be significant. I'm unconvinced, but with innumerable compilers "out there" taking advantage of the freedom and some people passionately defending that freedom, a change would be difficult and could take decades to penetrate to the distant corners of the C and C++ worlds. I am disappointed that not all compilers warn against code such as ++i+i++. Similarly, the order of evaluation of arguments is unspecified.

IMO far too many "things" are left undefined, unspecified, implementation-defined, etc. However, that's easy to say and even to give examples of, but hard to fix. It should also be noted that it is not all that difficult to avoid most of the problems and produce portable code.

Why can't I define constraints for my template parameters?
Well, you can, and it's quite easy and general.

Consider:
template<class Container>
void draw_all(Container& c)
{
for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
}
If there is a type error, it will be in the resolution of the fairly complicated for_each() call. For example, if the element type of the container is an int, then we get some kind of obscure error related to the for_each() call (because we can't invoke Shape::draw() for an int).

To catch such errors early, I can write:
template<class Container>
void draw_all(Container& c)
{
Shape* p = c.front(); // accept only containers of Shape*s

for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
}
The initialization of the spurious variable "p" will trigger a comprehensible error message from most current compilers. Tricks like this are common in all languages and have to be developed for all novel constructs. In production code, I'd probably write something like:
template<class Container>
void draw_all(Container& c)
{
typedef typename Container::value_type T;
Can_copy<T,Shape*>(); // accept containers of only Shape*s

for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
}
This makes it clear that I'm making an assertion. The Can_copy template can be defined like this:
template<class T1, class T2> struct Can_copy {
static void constraints(T1 a, T2 b) { T2 c = a; b = a; }
Can_copy() { void(*p)(T1,T2) = constraints; }
};
Can_copy checks (at compile time) that a T1 can be assigned to a T2. Can_copy<T,Shape*> checks that T is a Shape* or a pointer to a class publicly derived from Shape or a type with a user-defined conversion to Shape*. Note that the definition is close to minimal:
one line to name the constraints to be checked and the types for which to check them
one line to list the specific constraints checked (the constraints() function)
one line to provide a way to trigger the check (the constructor)

Note also that the definition has the desirable properties that
You can express constraints without declaring or copying variables, thus the writer of a constraint doesn't have to make assumptions about how a type is initialized, whether objects can be copied, destroyed, etc. (unless, of course, those are the properties being tested by the constraint)
No code is generated for a constraint using current compilers
No macros are needed to define or use constraints
Current compilers give acceptable error messages for a failed constraint, including the word "constraints" (to give the reader a clue), the name of the constraints, and the specific error that caused the failure (e.g. "cannot initialize Shape* by double*")

So why is something like Can_copy() - or something even more elegant - not in the language? D&E contains an analysis of the difficulties involved in expressing general constraints for C++. Since then, many ideas have emerged for making these constraints classes easier to write and still trigger good error messages. For example, I believe the use of a pointer to function the way I do in Can_copy originates with Alex Stepanov and Jeremy Siek. I don't think that Can_copy() is quite ready for standardization - it needs more use. Also, different forms of constraints are in use in the C++ community; there is not yet a consensus on exactly what form of constraints templates is the most effective over a wide range of uses.

However, the idea is very general, more general than language facilities that have been proposed and provided specifically for constraints checking. After all, when we write a template we have the full expressive power of C++ available. Consider:
template<class T, class B> struct Derived_from {
static void constraints(T* p) { B* pb = p; }
Derived_from() { void(*p)(T*) = constraints; }
};

template<class T1, class T2> struct Can_copy {
static void constraints(T1 a, T2 b) { T2 c = a; b = a; }
Can_copy() { void(*p)(T1,T2) = constraints; }
};

template<class T1, class T2 = T1> struct Can_compare {
static void constraints(T1 a, T2 b) { a==b; a!=b; a<b; }
Can_compare() { void(*p)(T1,T2) = constraints; }
};

template<class T1, class T2, class T3 = T1> struct Can_multiply {
static void constraints(T1 a, T2 b, T3 c) { c = a*b; }
Can_multiply() { void(*p)(T1,T2,T3) = constraints; }
};

struct B { };
struct D : B { };
struct DD : D { };
struct X { };

int main()
{
Derived_from<D,B>();
Derived_from<DD,B>();
Derived_from<X,B>();
Derived_from<int,B>();
Derived_from<X,int>();

Can_compare<int,float>();
Can_compare<X,B>();
Can_multiply<int,float>();
Can_multiply<int,float,double>();
Can_multiply<B,X>();

Can_copy<D*,B*>();
Can_copy<D,B*>();
Can_copy<int,B*>();
}

// the classical "elements must be derived from Mybase" constraint:

template<class T> class Container : Derived_from<T,Mybase> {
// ...
};
Actually, Derived_from doesn't check derivation, but conversion, but that's often a better constraint. Finding good names for constraints can be hard.

Why use sort() when we have "good old qsort()"?
To a novice,
qsort(array,asize,sizeof(elem),elem_compare);
looks pretty weird, and is harder to understand than
sort(vec.begin(),vec.end());
To an expert, the fact that sort() tends to be faster than qsort() for the same elements and the same comparison criteria is often significant. Also, sort() is generic, so that it can be used for any reasonable combination of container type, element type, and comparison criterion. For example:
struct Record {
string name;
// ...
};

struct name_compare { // compare Records using "name" as the key
bool operator()(const Record& a, const Record& b) const
{ return a.name<b.name; }
};

void f(vector<Record>& vs)
{
sort(vs.begin(), vs.end(), name_compare());
// ...
}

In addition, most people appreciate that sort() is type safe, that no casts are required to use it, and that they don't have to write a compare() function for standard types.

For a more detailed explanation, see my paper "Learning C++ as a New language", which you can download from my publications list.

The primary reason that sort() tends to outperform qsort() is that the comparison inlines better.

What is a function object?
An object that in some way behaves like a function, of course. Typically, that would mean an object of a class that defines the application operator - operator().

A function object is a more general concept than a function because a function object can have state that persists across several calls (like a static local variable) and can be initialized and examined from outside the object (unlike a static local variable). For example:
class Sum {
int val;
public:
Sum(int i) :val(i) { }
operator int() const { return val; } // extract value

int operator()(int i) { return val+=i; } // application
};

void f(vector<int> v)
{
Sum s = 0; // initial value 0
s = for_each(v.begin(), v.end(), s); // gather the sum of all elements
cout << "the sum is " << s << "\n";

// or even:
cout << "the sum is " << for_each(v.begin(), v.end(), Sum(0)) << "\n";
}
Note that a function object with an inline application operator inlines beautifully because there are no pointers involved that might confuse optimizers. To contrast: current optimizers are rarely (never?) able to inline a call through a pointer to function.

Function objects are extensively used to provide flexibility in the standard library.

How do I deal with memory leaks?
By writing code that doesn't have any. Clearly, if your code has new operations, delete operations, and pointer arithmetic all over the place, you are going to mess up somewhere and get leaks, stray pointers, etc. This is true independently of how conscientious you are with your allocations: eventually the complexity of the code will overcome the time and effort you can afford. It follows that successful techniques rely on hiding allocation and deallocation inside more manageable types. Good examples are the standard containers. They manage memory for their elements better than you could without disproportionate effort. Consider writing this without the help of string and vector:
#include<vector>
#include<string>
#include<iostream>
#include<algorithm>
using namespace std;

int main() // small program messing around with strings
{
cout << "enter some whitespace-separated words:\n";
vector<string> v;
string s;
while (cin>>s) v.push_back(s);

sort(v.begin(),v.end());

string cat;
typedef vector<string>::const_iterator Iter;
for (Iter p = v.begin(); p!=v.end(); ++p) cat += *p+"+";
cout << cat << '\n';
}
What would be your chance of getting it right the first time? And how would you know you didn't have a leak?

Note the absence of explicit memory management, macros, casts, overflow checks, explicit size limits, and pointers. By using a function object and a standard algorithm, I could have eliminated the pointer-like use of the iterator, but that seemed overkill for such a tiny program.

These techniques are not perfect and it is not always easy to use them systematically. However, they apply surprisingly widely and by reducing the number of explicit allocations and deallocations you make the remaining examples much easier to keep track of. As early as 1981, I pointed out that by reducing the number of objects that I had to keep track of explicitly from many tens of thousands to a few dozens, I had reduced the intellectual effort needed to get the program right from a Herculean task to something manageable, or even easy.

If your application area doesn't have libraries that make programming that minimizes explicit memory management easy, then the fastest way of getting your program complete and correct might be to first build such a library.

Templates and the standard libraries make this use of containers, resource handles, etc., much easier than it was even a few years ago. The use of exceptions makes it close to essential.

If you cannot handle allocation/deallocation implicitly as part of an object you need in your application anyway, you can use a resource handle to minimize the chance of a leak. Here is an example where I need to return an object allocated on the free store from a function. This is an opportunity to forget to delete that object. After all, we cannot tell just by looking at a pointer whether it needs to be deallocated and, if so, who is responsible for that. Using a resource handle, here the standard library auto_ptr, makes it clear where the responsibility lies:
#include<memory>
#include<iostream>
using namespace std;

struct S {
S() { cout << "make an S\n"; }
~S() { cout << "destroy an S\n"; }
S(const S&) { cout << "copy initialize an S\n"; }
S& operator=(const S&) { cout << "copy assign an S\n"; return *this; }
};

S* f()
{
return new S; // who is responsible for deleting this S?
}

auto_ptr<S> g()
{
return auto_ptr<S>(new S); // explicitly transfer responsibility for deleting this S
}

int main()
{
cout << "start main\n";
S* p = f();
cout << "after f() before g()\n";
// S* q = g(); // this error would be caught by the compiler
auto_ptr<S> q = g();
cout << "exit main\n";
// leaks *p
// implicitly deletes *q
}

Think about resources in general, rather than simply about memory.

If systematic application of these techniques is not possible in your environment (you have to use code from elsewhere, part of your program was written by Neanderthals, etc.), be sure to use a memory leak detector as part of your standard development procedure, or plug in a garbage collector.

Why can't I resume after catching an exception?
In other words, why doesn't C++ provide a primitive for returning to the point from which an exception was thrown and continuing execution from there?

Basically, someone resuming from an exception handler can never be sure that the code after the point of throw was written to deal with the execution just continuing as if nothing had happened. An exception handler cannot know how much context to "get right" before resuming. To get such code right, the writer of the throw and the writer of the catch need intimate knowledge of each other's code and context. This creates a complicated mutual dependency that, wherever it has been allowed, has led to serious maintenance problems.

I seriously considered the possibility of allowing resumption when I designed the C++ exception handling mechanism and this issue was discussed in quite some detail during standardization. See the exception handling chapter of The Design and Evolution of C++.

If you want to check to see if you can fix a problem before throwing an exception, call a function that checks and then throws only if the problem cannot be dealt with locally. A new_handler is an example of this.

Why doesn't C++ have an equivalent to realloc()?
If you want to, you can of course use realloc(). However, realloc() is only guaranteed to work on arrays allocated by malloc() (and similar functions) containing objects without user-defined copy constructors. Also, please remember that contrary to naive expectations, realloc() occasionally does copy its argument array.

In C++, a better way of dealing with reallocation is to use a standard library container, such as vector, and let it grow naturally.
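For example, a sketch of letting a vector grow instead of calling realloc() (the fill() helper is illustrative):

```cpp
#include <vector>

// The vector reallocates its own storage as needed; no manual realloc().
std::vector<int> fill(int n)
{
    std::vector<int> v;          // starts empty
    for (int i = 0; i < n; ++i)
        v.push_back(i);          // grows internally when capacity is exceeded
    return v;
}
```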

Why use exceptions?
What good can using exceptions do for me? The basic answer is: Using exceptions for error handling makes your code simpler, cleaner, and less likely to miss errors. But what's wrong with "good old errno and if-statements"? The basic answer is: Using those, your error handling and your normal code are closely intertwined. That way, your code gets messy and it becomes hard to ensure that you have dealt with all errors (think "spaghetti code" or a "rat's nest of tests").

First of all there are things that just can't be done right without exceptions. Consider an error detected in a constructor; how do you report the error? You throw an exception. That's the basis of RAII (Resource Acquisition Is Initialization), which is the basis of some of the most effective modern C++ design techniques: A constructor's job is to establish the invariant for the class (create the environment in which the member functions are to run) and that often requires the acquisition of resources, such as memory, locks, files, sockets, etc.

Imagine that we did not have exceptions, how would you deal with an error detected in a constructor? Remember that constructors are often invoked to initialize/construct objects in variables:
vector<double> v(100000); // needs to allocate memory
ofstream os("myfile"); // needs to open a file
The vector or ofstream (output file stream) constructor could put the variable into a "bad" state (as ifstream does by default) so that every subsequent operation fails. That's not ideal. For example, in the case of ofstream, your output simply disappears if you forget to check that the open operation succeeded. For most classes the results are worse. At least, we would have to write:
vector<double> v(100000); // needs to allocate memory
if (v.bad()) { /* handle error */ } // vector doesn't actually have a bad(); it relies on exceptions
ofstream os("myfile"); // needs to open a file
if (os.bad()) { /* handle error */ }
That's an extra test per object (to write, to remember or forget). This gets really messy for classes composed of several objects, especially if those sub-objects depend on each other. For more information see The C++ Programming Language section 8.3, Chapter 14, and Appendix E or the (more academic) paper Exception safety: Concepts and techniques.

So writing constructors can be tricky without exceptions, but what about plain old functions? We can either return an error code or set a non-local variable (e.g. errno). Setting a global variable doesn't work too well unless you test it immediately (or some other function might have re-set it). Don't even think of that technique if you might have multiple threads accessing the global variable. The trouble with return values is that choosing the error return value can require cleverness and can be impossible:
double d = my_sqrt(-1); // return -1 in case of error
if (d == -1) { /* handle error */ }
int x = my_negate(MIN_INT); // Duh?
There is no possible value for my_negate() to return: Every possible int is the correct answer for some int, and there is no correct answer for the most negative number in the two's-complement representation. In such cases, we would need to return pairs of values (and, as usual, remember to test). See my Beginning programming book for more examples and explanations.
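A pair-returning variant can be sketched like this (the my_negate name follows the text; the success-flag convention is an illustrative choice):

```cpp
#include <utility>
#include <climits>

// Illustrative sketch: report success separately from the value,
// since every int is a valid negation result except for INT_MIN.
std::pair<bool, int> my_negate(int x)
{
    if (x == INT_MIN) return std::make_pair(false, 0); // no representable answer
    return std::make_pair(true, -x);
}
```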

Common objections to the use of exceptions:
but exceptions are expensive!: Not really. Modern C++ implementations reduce the overhead of using exceptions to a few percent (say, 3%) and that's compared to no error handling. Writing code with error-return codes and tests is not free either. As a rule of thumb, exception handling is extremely cheap when you don't throw an exception. It costs nothing on some implementations. All the cost is incurred when you throw an exception: that is, "normal code" is faster than code using error-return codes and tests. You incur cost only when you have an error.
but in JSF++ you yourself ban exceptions outright!: JSF++ is for hard-real time and safety-critical applications (flight control software). If a computation takes too long someone may die. For that reason, we have to guarantee response times, and we can't - with the current level of tool support - do that for exceptions. In that context, even free store allocation is banned! Actually, the JSF++ recommendations for error handling simulate the use of exceptions in anticipation of the day where we have the tools to do things right, i.e. using exceptions.
but throwing an exception from a constructor invoked by new causes a memory leak!: Nonsense! That's an old wives' tale caused by a bug in one compiler - and that bug was immediately fixed over a decade ago.
How do I use exceptions?
See The C++ Programming Language section 8.3, Chapter 14, and Appendix E. The appendix focuses on techniques for writing exception-safe code in demanding applications, and is not written for novices.

In C++, exceptions are used to signal errors that cannot be handled locally, such as the failure to acquire a resource in a constructor. For example:
class Vector {
int sz;
int* elem;
class Range_error { };
public:
Vector(int s) : sz(s) { if (sz<0) throw Range_error(); /* ... */ }
// ...
};
Do not use exceptions as simply another way to return a value from a function. Most users assume - as the language definition encourages them to - that exception-handling code is error-handling code, and implementations are optimized to reflect that assumption.

A key technique is resource acquisition is initialization (sometimes abbreviated to RAII), which uses classes with destructors to impose order on resource management. For example:
void fct(string s)
{
File_handle f(s,"r"); // File_handle's constructor opens the file called "s"
// use f
} // here File_handle's destructor closes the file
If the "use f" part of fct() throws an exception, the destructor is still invoked and the file is properly closed. This contrasts with the common unsafe usage:
void old_fct(const char* s)
{
FILE* f = fopen(s,"r"); // open the file named "s"
// use f
fclose(f); // close the file
}
If the "use f" part of old_fct throws an exception - or simply does a return - the file isn't closed. In C programs, longjmp() is an additional hazard.

Why can't I assign a vector<Apple*> to a vector<Fruit*>?
Because that would open a hole in the type system. For example:
class Apple : public Fruit { void apple_fct(); /* ... */ };
class Orange : public Fruit { /* ... */ }; // Orange doesn't have apple_fct()

vector<Apple*> v; // vector of Apples

void f(vector<Fruit*>& vf) // innocent Fruit manipulating function
{
vf.push_back(new Orange); // add orange to vector of fruit
}

void h()
{
f(v); // error: cannot pass a vector<Apple*> as a vector<Fruit*>
for (int i=0; i<v.size(); ++i) v[i]->apple_fct();
}
Had the call f(v) been legal, we would have had an Orange pretending to be an Apple.

An alternative language design decision would have been to allow the unsafe conversion, but rely on dynamic checking. That would have required a run-time check for each access to v's members, and h() would have had to throw an exception upon encountering the last element of v.

Why doesn't C++ have a universal class Object?
We don't need one: generic programming provides statically type safe alternatives in most cases. Other cases are handled using multiple inheritance.
There is no useful universal class: a truly universal class carries no semantics of its own.
A "universal" class encourages sloppy thinking about types and interfaces and leads to excess run-time checking.
Using a universal base class implies cost: Objects must be heap-allocated to be polymorphic; that implies memory and access cost. Heap objects don't naturally support copy semantics. Heap objects don't support simple scoped behavior (which complicates resource management). A universal base class encourages use of dynamic_cast and other run-time checking.
Yes. I have simplified the arguments; this is an FAQ, not an academic paper.

Do we really need multiple inheritance?
Not really. We can do without multiple inheritance by using workarounds, exactly as we can do without single inheritance by using workarounds. We can even do without classes by using workarounds. C is a proof of that contention. However, every modern language with static type checking and inheritance provides some form of multiple inheritance. In C++, abstract classes often serve as interfaces and a class can have many interfaces. Other languages - often deemed "not MI" - simply have a separate name for their equivalent to a pure abstract class: an interface. The reason languages provide inheritance (both single and multiple) is that language-supported inheritance is typically superior to workarounds (e.g. use of forwarding functions to sub-objects or separately allocated objects) for ease of programming, for detecting logical problems, for maintainability, and often for performance.
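A minimal sketch of the interface style mentioned above (the Printable, Persistent, and Document names are invented for illustration):

```cpp
// Abstract classes used as interfaces: a class can implement several.
class Printable {
public:
    virtual const char* name() const = 0;
    virtual ~Printable() { }
};

class Persistent {
public:
    virtual int version() const = 0;
    virtual ~Persistent() { }
};

class Document : public Printable, public Persistent { // multiple interfaces
public:
    const char* name() const { return "Document"; }
    int version() const { return 1; }
};
```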

How do I read a string from input?
You can read a single, whitespace terminated word like this:
#include<iostream>
#include<string>
using namespace std;

int main()
{
cout << "Please enter a word:\n";

string s;
cin>>s;

cout << "You entered " << s << '\n';
}
Note that there is no explicit memory management and no fixed-sized buffer that you could possibly overflow.

If you really need a whole line (and not just a single word) you can do this:
#include<iostream>
#include<string>
using namespace std;

int main()
{
cout << "Please enter a line:\n";

string s;
getline(cin,s);

cout << "You entered " << s << '\n';
}
For a brief introduction to standard library facilities, such as iostream and string, see Chapter 3 of TC++PL3 (available online). For a detailed comparison of simple uses of C and C++ I/O, see "Learning Standard C++ as a New Language", which you can download from my publications list.

Is "generics" what templates should have been?
No. Generics are primarily syntactic sugar for abstract classes; that is, with generics (whether Java or C# generics), you program against precisely defined interfaces and typically pay the cost of virtual function calls and/or dynamic casts to use arguments.

Templates support generic programming, template metaprogramming, etc. through a combination of features such as integer template arguments, specialization, and uniform treatment of built-in and user-defined types. The result is flexibility, generality, and performance unmatched by "generics". The STL is the prime example.

A less desirable result of the flexibility is late detection of errors and horrendously bad error messages. This is currently being addressed indirectly with constraints classes and will be directly addressed in C++0x with concepts (see my publications, my proposals, and The standards committee's site for all proposals).

Can I throw an exception from a constructor? From a destructor?
Yes: You should throw an exception from a constructor whenever you cannot properly initialize (construct) an object. There is no really satisfactory alternative to exiting a constructor by a throw.
Not really: You can throw an exception in a destructor, but that exception must not leave the destructor; if a destructor exits by a throw, all kinds of bad things are likely to happen because the basic rules of the standard library and the language itself will be violated. Don't do it.
For examples and detailed explanations, see Appendix E of The C++ Programming Language.
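One common way to obey the destructor rule is to catch any exception inside the destructor itself. A sketch (the Connection class and its bookkeeping are invented for illustration):

```cpp
// Illustrative sketch: never let an exception escape a destructor.
struct Connection {
    static int closed_count;    // bookkeeping for illustration only
    bool open;
    Connection() : open(true) { }
    ~Connection()
    {
        try {
            if (open) close();  // cleanup that could, in principle, throw
        }
        catch (...) {
            // report/log and swallow; never propagate from a destructor
        }
    }
    void close() { open = false; ++closed_count; }
};

int Connection::closed_count = 0;
```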

There is a caveat: Exceptions can't be used for some hard-real time projects. For example, see the JSF air vehicle C++ coding standards.

Why doesn't C++ provide a "finally" construct?
Because C++ supports an alternative that is almost always better: The "resource acquisition is initialization" technique (TC++PL3 section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor will release the resource. That way, the programmer cannot forget to release the resource. For example:
class File_handle {
FILE* p;
public:
File_handle(const char* n, const char* a)
{ p = fopen(n,a); if (p==0) throw Open_error(errno); }
File_handle(FILE* pp)
{ p = pp; if (p==0) throw Open_error(errno); }

~File_handle() { fclose(p); }

operator FILE*() { return p; }

// ...
};

void f(const char* fn)
{
File_handle f(fn,"rw"); // open fn for reading and writing
// use file through f
}
In a system, we need a "resource handle" class for each resource. However, we don't have to have a "finally" clause for each acquisition of a resource. In realistic systems, there are far more resource acquisitions than kinds of resources, so the "resource acquisition is initialization" technique leads to less code than use of a "finally" construct.

Also, have a look at the examples of resource management in Appendix E of The C++ Programming Language.

What is an auto_ptr and why isn't there an auto_array?
An auto_ptr is an example of a very simple handle class, defined in <memory>, supporting exception safety using the resource acquisition is initialization technique. An auto_ptr holds a pointer, can be used as a pointer, and deletes the object pointed to at the end of its scope. For example:
#include<memory>
using namespace std;

struct X {
int m;
// ..
};

void f()
{
auto_ptr<X> p(new X);
X* q = new X;

p->m++; // use p just like a pointer
q->m++;
// ...

delete q;
}
If an exception is thrown in the ... part, the object held by p is correctly deleted by auto_ptr's destructor while the X pointed to by q is leaked. See TC++PL 14.4.2 for details.

Auto_ptr is a very lightweight class. In particular, it is *not* a reference counted pointer. If you "copy" one auto_ptr into another, the target auto_ptr holds the pointer and the source auto_ptr holds 0. For example:
#include<memory>
#include<iostream>
using namespace std;

struct X {
int m;
// ..
};

int main()
{
auto_ptr<X> p(new X);
auto_ptr<X> q(p);
cout << "p " << p.get() << " q " << q.get() << "\n";
}
should print a 0 pointer followed by a non-0 pointer. For example:
p 0x0 q 0x378d0
auto_ptr::get() returns the held pointer.

This "move semantics" differs from the usual "copy semantics", and can be surprising. In particular, never use an auto_ptr as a member of a standard container. The standard containers require the usual copy semantics. For example:
std::vector<auto_ptr<X> >v; // error

An auto_ptr holds a pointer to an individual element, not a pointer to an array:
void f(int n)
{
auto_ptr<X> p(new X[n]); // error
// ...
}
This is an error because the destructor will delete the pointer using delete rather than delete[] and will fail to invoke the destructor for the last n-1 Xs.

So should we use an auto_array to hold arrays? No. There is no auto_array. The reason is that there isn't a need for one. A better solution is to use a vector:
void f(int n)
{
vector<X> v(n);
// ...
}
Should an exception occur in the ... part, v's destructor will be correctly invoked.

In C++0x, use a unique_ptr.

What shouldn't I use exceptions for?
C++ exceptions are designed to support error handling. Use throw only to signal an error and catch only to specify error handling actions. There are other uses of exceptions - popular in other languages - but not idiomatic in C++ and deliberately not supported well by C++ implementations (those implementations are optimized based on the assumption that exceptions are used for error handling).

In particular, throw is not simply an alternative way of returning a value from a function (similar to return). Doing so will be slow and will confuse most C++ programmers, who are used to seeing exceptions used only for error handling. Similarly, throw is not a good way of getting out of a loop.
What is the difference between new and malloc()?
malloc() is a function that takes a number (of bytes) as its argument; it returns a void* pointing to uninitialized storage. new is an operator that takes a type and (optionally) a set of initializers for that type as its arguments; it returns a pointer to an (optionally) initialized object of its type. The difference is most obvious when you want to allocate an object of a user-defined type with non-trivial initialization semantics. Examples:
class Circle : public Shape {
public:
Circle(Point c, int r);
// no default constructor
// ...
};

class X {
public:
X(); // default constructor
// ...
};

void f(int n)
{
void* p1 = malloc(40); // allocate 40 (uninitialized) bytes

int* p2 = new int[10]; // allocate 10 uninitialized ints
int* p3 = new int(10); // allocate 1 int initialized to 10
int* p4 = new int(); // allocate 1 int initialized to 0
int* p5 = new int; // allocate 1 uninitialized int

Circle* pc1 = new Circle(Point(0,0),10); // allocate a Circle constructed
// with the specified argument
Circle* pc2 = new Circle; // error no default constructor

X* px1 = new X; // allocate a default constructed X
X* px2 = new X(); // allocate a default constructed X
X* px3 = new X[10]; // allocate 10 default constructed Xs
// ...
}
Note that when you specify an initializer using the "(value)" notation, you get initialization with that value. Unfortunately, you cannot specify that for an array. Often, a vector is a better alternative to a free-store-allocated array (e.g., consider exception safety).
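For instance, a sketch of getting the per-element initialization that new T[n] cannot express, using vector's fill constructor (make_filled is an illustrative helper):

```cpp
#include <vector>

// new int[n] offers no way to give each element a value,
// but vector's fill constructor does.
std::vector<int> make_filled(int n, int value)
{
    return std::vector<int>(n, value);   // n ints, each initialized to value
}
```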

Whenever you use malloc() you must consider initialization and conversion of the return pointer to a proper type. You will also have to consider whether you got the number of bytes right for your use. There is no performance difference between malloc() and new when you take initialization into account.

malloc() reports memory exhaustion by returning 0. new reports allocation and initialization errors by throwing exceptions.
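The two error-reporting styles can be sketched side by side (the helper functions are invented for illustration):

```cpp
#include <cstdlib>
#include <new>

// C style: check the return value of malloc() for 0.
int* c_style_alloc()
{
    int* p = (int*)std::malloc(sizeof(int));
    if (p == 0) return 0;  // handle exhaustion here
    *p = 0;                // malloc leaves the memory uninitialized
    return p;
}

// C++ style: new initializes and throws std::bad_alloc on exhaustion.
int* cpp_style_alloc()
{
    try {
        return new int(0);
    }
    catch (const std::bad_alloc&) {
        return 0;          // handle exhaustion here
    }
}
```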

Objects created by new are destroyed by delete. Areas of memory allocated by malloc() are deallocated by free().

Can I mix C-style and C++ style allocation and deallocation?
Yes, in the sense that you can use malloc() and new in the same program.

No, in the sense that you cannot allocate an object with malloc() and free it using delete. Nor can you allocate with new and delete with free() or use realloc() on an array allocated by new.

The C++ operators new and delete guarantee proper construction and destruction; where constructors or destructors need to be invoked, they are. The C-style functions malloc(), calloc(), free(), and realloc() don't ensure that. Furthermore, there is no guarantee that the mechanism used by new and delete to acquire and release raw memory is compatible with malloc() and free(). If mixing styles works on your system, you were simply "lucky" - for now.

If you feel the need for realloc() - and many do - then consider using a standard library vector. For example
// read words from input into a vector of strings:

vector<string> words;
string s;
while (cin>>s && s!=".") words.push_back(s);
The vector expands as needed.

See also the examples and discussion in "Learning Standard C++ as a New Language", which you can download from my publications list.

Why must I use a cast to convert from void*?

In C, you can implicitly convert a void* to a T*. This is unsafe. Consider:
#include<stdio.h>

int main()
{
char i = 0;
char j = 0;
char* p = &i;
void* q = p;
int* pp = q; /* unsafe, legal C, not C++ */

printf("%d %d\n",i,j);
*pp = -1; /* overwrite memory starting at &i */
printf("%d %d\n",i,j);
}
The effects of using a T* that doesn't point to a T can be disastrous. Consequently, in C++, to get a T* from a void* you need an explicit cast. For example, to get the undesirable effects of the program above, you have to write:
int* pp = (int*)q;
or, using a new style cast to make the unchecked type conversion operation more visible:
int* pp = static_cast<int*>(q);
Casts are best avoided.

One of the most common uses of this unsafe conversion in C is to assign the result of malloc() to a suitable pointer. For example:
int* p = malloc(sizeof(int));
In C++, use the typesafe new operator:
int* p = new int;
Incidentally, the new operator offers additional advantages over malloc():
new can't accidentally allocate the wrong amount of memory,
new implicitly checks for memory exhaustion, and
new provides for initialization
For example:
typedef std::complex<double> cmplx;

/* C style: */
cmplx* p = (cmplx*)malloc(sizeof(int)); /* error: wrong size */
/* forgot to test for p==0 */
if (*p == 7) { /* ... */ } /* oops: forgot to initialize *p */

// C++ style:
cmplx* q = new cmplx(1,2); // will throw bad_alloc if memory is exhausted
if (*q == 7) { /* ... */ }

How do I define an in-class constant?

If you want a constant that you can use in a constant expression, say as an array bound, you have two choices:
class X {
static const int c1 = 7;
enum { c2 = 19 };

char v1[c1];
char v2[c2];

// ...
};
At first glance, the declaration of c1 seems cleaner, but note that to use that in-class initialization syntax, the constant must be a static const of integral or enumeration type initialized by a constant expression. That's quite restrictive:
class Y {
const int c3 = 7; // error: not static
static int c4 = 7; // error: not const
static const float c5 = 7; // error: not integral
};
I tend to use the "enum trick" because it's portable and doesn't tempt me to use non-standard extensions of the in-class initialization syntax.

So why do these inconvenient restrictions exist? A class is typically declared in a header file and a header file is typically included into many translation units. However, to avoid complicated linker rules, C++ requires that every object has a unique definition. That rule would be broken if C++ allowed in-class definition of entities that needed to be stored in memory as objects. See D&E for an explanation of C++'s design tradeoffs.

You have more flexibility if the const isn't needed for use in a constant expression:
class Z {
static char* p; // initialize in definition
const int i; // initialize in constructor
public:
Z(int ii) :i(ii) { }
};

char* Z::p = "hello, there";
You can take the address of a static member if (and only if) it has an out-of-class definition:
class AE {
// ...
public:
static const int c6 = 7;
static const int c7 = 31;
};

const int AE::c7; // definition

int f()
{
const int* p1 = &AE::c6; // error: c6 not an lvalue
const int* p2 = &AE::c7; // ok
// ...
}

Why doesn't delete zero out its operand?
Consider
delete p;
// ...
delete p;
If the ... part doesn't touch p then the second "delete p;" is a serious error that a C++ implementation cannot effectively protect itself against (without unusual precautions). Since deleting a zero pointer is harmless by definition, a simple solution would be for "delete p;" to do a "p=0;" after it has done whatever else is required. However, C++ doesn't guarantee that.

One reason is that the operand of delete need not be an lvalue. Consider:
delete p+1;
delete f(x);
Here, the implementation of delete does not have a pointer to which it can assign zero. These examples may be rare, but they do imply that it is not possible to guarantee that ``any pointer to a deleted object is 0.'' A simpler way of bypassing that ``rule'' is to have two pointers to an object:
T* p = new T;
T* q = p;
delete p;
delete q; // ouch!
C++ explicitly allows an implementation of delete to zero out an lvalue operand, and I had hoped that implementations would do that, but that idea doesn't seem to have become popular with implementers.

If you consider zeroing out pointers important, consider using a destroy function:
template<class T> inline void destroy(T*& p) { delete p; p = 0; }

Consider this yet-another reason to minimize explicit use of new and delete by relying on standard library containers, handles, etc.

Note that passing the pointer as a reference (to allow the pointer to be zero'd out) has the added benefit of preventing destroy() from being called for an rvalue:
int* f();
int* p;
// ...
destroy(f()); // error: trying to pass an rvalue by non-const reference
destroy(p+1); // error: trying to pass an rvalue by non-const reference
Why isn't the destructor called at the end of scope?
The simple answer is "of course it is!", but have a look at the kind of example that often accompanies that question:
void f()
{
X* p = new X;
// use p
}
That is, there was some (mistaken) assumption that the object created by "new" would be destroyed at the end of a function.

Basically, you should only use "new" if you want an object to live beyond the lifetime of the scope you create it in. That done, you need to use "delete" to destroy it. For example:
X* g(int i) { /* ... */ return new X(i); } // the X outlives the call of g()

void h(int i)
{
X* p = g(i);
// ...
delete p;
}
If you want an object to live in a scope only, don't use "new" but simply define a variable:
{
ClassName x;
// use x
}
The variable is implicitly destroyed at the end of the scope.

Code that creates an object using new and then deletes it at the end of the same scope is ugly, error-prone, and inefficient. For example:
void fct() // ugly, error-prone, and inefficient
{
X* p = new X;
// use p
delete p;
}
Can I write "void main()"?
The definition
void main() { /* ... */ }
is not and never has been C++, nor has it even been C. See the ISO C++ standard 3.6.1[2] or the ISO C standard 5.1.2.2.1. A conforming implementation accepts
int main() { /* ... */ }
and
int main(int argc, char* argv[]) { /* ... */ }
A conforming implementation may provide more versions of main(), but they must all have return type int. The int returned by main() is a way for a program to return a value to "the system" that invokes it. On systems that don't provide such a facility the return value is ignored, but that doesn't make "void main()" legal C++ or legal C. Even if your compiler accepts "void main()" avoid it, or risk being considered ignorant by C and C++ programmers.

In C++, main() need not contain an explicit return statement. In that case, the value returned is 0, meaning successful execution. For example:
#include<iostream>

int main()
{
std::cout << "This program returns the integer value 0\n";
}
Note also that neither ISO C++ nor C99 allows you to leave the type out of a declaration. That is, in contrast to C89 and ARM C++, "int" is not assumed where a type is missing in a declaration. Consequently:
#include<iostream>

main() { /* ... */ }
is an error because the return type of main() is missing.

Why can't I overload dot, ::, sizeof, etc.?
Most operators can be overloaded by a programmer. The exceptions are
. (dot) :: ?: sizeof
There is no fundamental reason to disallow overloading of ?:. I just didn't see the need to introduce the special case of overloading a ternary operator. Note that a function overloading expr1?expr2:expr3 would not be able to guarantee that only one of expr2 and expr3 was executed.

Sizeof cannot be overloaded because built-in operations, such as incrementing a pointer into an array, implicitly depend on it. Consider:
X a[10];
X* p = &a[3];
X* q = &a[3];
p++; // p points to a[4]
// thus the integer value of p must be
// sizeof(X) larger than the integer value of q
Thus, sizeof(X) could not be given a new and different meaning by the programmer without violating basic language rules.

In N::m, neither N nor m is an expression with a value; N and m are names known to the compiler, and :: performs a (compile-time) scope resolution rather than an expression evaluation. One could imagine allowing overloading of x::y where x is an object rather than a namespace or a class, but that would - contrary to first appearances - involve introducing new syntax (to allow expr::expr). It is not obvious what benefits such a complication would bring.

Operator . (dot) could in principle be overloaded using the same technique as used for ->. However, doing so can lead to questions about whether an operation is meant for the object overloading . or for the object referred to by it. For example:
class Y {
public:
void f();
// ...
};

class X { // assume that you can overload .
Y* p;
Y& operator.() { return *p; }
void f();
// ...
};

void g(X& x)
{
x.f(); // X::f or Y::f or error?
}

This problem can be solved in several ways. At the time of standardization, it was not obvious which way would be best. For more details, see D&E.

Can I define my own operators?
Sorry, no. The possibility has been considered several times, but each time I/we decided that the likely problems outweighed the likely benefits.

It's not a language-technical problem. Even when I first considered it in 1983, I knew how it could be implemented. However, my experience has been that when we go beyond the most trivial examples people seem to have subtly different opinions of "the obvious" meaning of uses of an operator. A classical example is a**b**c. Assume that ** has been made to mean exponentiation. Now should a**b**c mean (a**b)**c or a**(b**c)? I thought the answer was obvious and my friends agreed - and then we found that we didn't agree on which resolution was the obvious one. My conjecture is that such problems would lead to subtle bugs.

How do I convert an integer to a string?
The simplest way is to use a stringstream:
#include<iostream>
#include<string>
#include<sstream>
using namespace std;

string itos(int i) // convert int to string
{
stringstream s;
s << i;
return s.str();
}

int main()
{
int i = 127;
string ss = itos(i);
const char* p = ss.c_str();

cout << ss << " " << p << "\n";
}
Naturally, this technique works for converting any type that you can output using << to a string. For a description of string streams, see 21.5.3 of The C++ Programming Language.
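That generalization can be sketched as a function template (to_str is an illustrative name, not a standard facility):

```cpp
#include <sstream>
#include <string>

// Converts any value that supports << to its text representation.
template<class T> std::string to_str(const T& t)
{
    std::ostringstream s;
    s << t;      // works for any type with an output operator
    return s.str();
}
```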

How do I call a C function from C++?
Just declare the C function ``extern "C"'' (in your C++ code) and call it (from your C or C++ code). For example:
// C++ code

extern "C" void f(int); // one way

extern "C" { // another way
int g(double);
double h();
};

void code(int i, double d)
{
f(i);
int ii = g(d);
double dd = h();
// ...
}
The definitions of the functions may look like this:
/* C code: */

void f(int i)
{
/* ... */
}

int g(double d)
{
/* ... */
}

double h()
{
/* ... */
}

Note that C++ type rules, not C rules, are used. So you can't call a function declared ``extern "C"'' with the wrong number of arguments. For example:
// C++ code

void more_code(int i, double d)
{
double dd = h(i,d); // error: unexpected arguments
// ...
}
How do I call a C++ function from C?
Just declare the C++ function ``extern "C"'' (in your C++ code) and call it (from your C or C++ code). For example:
// C++ code:

extern "C" void f(int);

void f(int i)
{
// ...
}
Now f() can be used like this:
/* C code: */

void f(int);

void cc(int i)
{
f(i);
/* ... */
}
Naturally, this works only for non-member functions. If you want to call member functions (incl. virtual functions) from C, you need to provide a simple wrapper. For example:
// C++ code:

class C {
// ...
virtual double f(int);
};

extern "C" double call_C_f(C* p, int i) // wrapper function
{
return p->f(i);
}
Now C::f() can be used like this:
/* C code: */

double call_C_f(struct C* p, int i);

void ccc(struct C* p, int i)
{
double d = call_C_f(p,i);
/* ... */
}
If you want to call overloaded functions from C, you must provide wrappers with distinct names for the C code to use. For example:
// C++ code:

void f(int);
void f(double);

extern "C" void f_i(int i) { f(i); }
extern "C" void f_d(double d) { f(d); }
Now the f() functions can be used like this:
/* C code: */

void f_i(int);
void f_d(double);

void cccc(int i,double d)
{
f_i(i);
f_d(d);
/* ... */
}
Note that these techniques can be used to call a C++ library from C code even if you cannot (or do not want to) modify the C++ headers.
Is ``int* p;'' right or is ``int *p;'' right?
Both are "right" in the sense that both are valid C and C++ and both have exactly the same meaning. As far as the language definitions and the compilers are concerned we could just as well say ``int*p;'' or ``int * p;''

The choice between ``int* p;'' and ``int *p;'' is not about right and wrong, but about style and emphasis. C emphasized expressions; declarations were often considered little more than a necessary evil. C++, on the other hand, has a heavy emphasis on types.

A ``typical C programmer'' writes ``int *p;'' and explains it ``*p is what is the int'' emphasizing syntax, and may point to the C (and C++) declaration grammar to argue for the correctness of the style. Indeed, the * binds to the name p in the grammar.

A ``typical C++ programmer'' writes ``int* p;'' and explains it ``p is a pointer to an int'' emphasizing type. Indeed the type of p is int*. I clearly prefer that emphasis and see it as important for using the more advanced parts of C++ well.

The critical confusion comes (only) when people try to declare several pointers with a single declaration:
int* p, p1; // probable error: p1 is not an int*
Placing the * closer to the name does not make this kind of error significantly less likely.
int *p, p1; // probable error?
Declaring one name per declaration minimizes the problem - in particular when we initialize the variables. People are far less likely to write:
int* p = &i;
int p1 = p; // error: int initialized by int*
And if they do, the compiler will complain.

Whenever something can be done in two ways, someone will be confused. Whenever something is a matter of taste, discussions can drag on forever. Stick to one pointer per declaration and always initialize variables and the source of confusion disappears. See The Design and Evolution of C++ for a longer discussion of the C declaration syntax.

Which layout style is the best for my code?
Such style issues are a matter of personal taste. Often, opinions about code layout are strongly held, but probably consistency matters more than any particular style. Like most people, I'd have a hard time constructing a solid logical argument for my preferences.

I personally use what is often called "K&R" style. When you add conventions for constructs not found in C, that becomes what is sometimes called "Stroustrup" style. For example:
class C : public B {
public:
// ...
};

void f(int* p, int max)
{
if (p) {
// ...
}

for (int i = 0; i<max; ++i) {
// ...
}
}
This style conserves vertical space better than most layout styles, and I like to fit as much as is reasonable onto a screen. Placing the opening brace of a function on a new line helps me distinguish function definitions from class definitions at a glance.

Indentation is very important.

Design issues, such as the use of abstract classes for major interfaces, use of templates to present flexible type-safe abstractions, and proper use of exceptions to represent errors, are far more important than the choice of layout style.

How do you name variables? Do you recommend "Hungarian"?
No I don't recommend "Hungarian". I regard "Hungarian" (embedding an abbreviated version of a type in a variable name) as a technique that can be useful in untyped languages, but is completely unsuitable for a language that supports generic programming and object-oriented programming - both of which emphasize selection of operations based on the type of an argument (known to the language or to the run-time support). In this case, "building the type of an object into names" simply complicates and minimizes abstraction. To varying extents, I have similar problems with every scheme that embeds information about language-technical details (e.g., scope, storage class, syntactic category) into names. I agree that in some cases, building type hints into variable names can be helpful, but in general, and especially as software evolves, this becomes a maintenance hazard and a serious detriment to good code. Avoid it like the plague.

So, I don't like naming a variable after its type; what do I like and recommend? Name a variable (function, type, whatever) based on what it is or does. Choose meaningful names; that is, choose names that will help people understand your program. Even you will have problems understanding what your program is supposed to do if you litter it with variables with easy-to-type names like x1, x2, s3, and p7. Abbreviations and acronyms can confuse people, so use them sparingly. Consider mtbf, TLA, myw, RTFM, and NBV. They are obvious now, but wait a few months and even I will have forgotten at least one.

Short names, such as x and i, are meaningful when used conventionally; that is, x should be a local variable or a parameter and i should be a loop index.

Don't use overly long names; they are hard to type, make lines so long that they don't fit on a screen, and are hard to read quickly. These are probably ok:
partial_sum element_count staple_partition
These are probably too long:
the_number_of_elements remaining_free_slots_in_symbol_table
I prefer to use underscores to separate words in an identifier (e.g., element_count) rather than alternatives, such as elementCount and ElementCount. Never use names with all capital letters (e.g., BEGIN_TRANSACTION) because that's conventionally reserved for macros. Even if you don't use macros, someone might have littered your header files with them. Use an initial capital letter for types (e.g., Square and Graph). The C++ language and standard library don't use capital letters, so it's int rather than Int and string rather than String. That way, you can recognize the standard types.

Avoid names that are easy to mistype, misread, or confuse. For example
name names nameS
foo f00
fl f1 fI fi
The characters 0, o, O, 1, l, and I are particularly prone to cause trouble.

Often, your choice of naming conventions is limited by local style rules. Remember that maintaining a consistent style is often more important than doing every little detail in the way you think best.

Should I put "const" before or after the type?
I put it before, but that's a matter of taste. "const T" and "T const" were - and are - (both) allowed and equivalent. For example:
const int a = 1; // ok
int const b = 2; // also ok
My guess is that using the first version will confuse fewer programmers (``is more idiomatic'').

Why? When I invented "const" (initially named "readonly", with a corresponding "writeonly"), I allowed it to go before or after the type because I could do so without ambiguity. Pre-standard C and C++ imposed few (if any) ordering rules on specifiers.

I don't remember any deep thoughts or involved discussions about the order at the time. A few of the early users - notably me - simply liked the look of
const int c = 10;
better than
int const c = 10;
at the time.

I may have been influenced by the fact that my earliest examples were written using "readonly" and
readonly int c = 10;
does read better than
int readonly c = 10;
The earliest (C or C++) code using "const" appears to have been created (by me) by a global substitution of "const" for "readonly".

I remember discussing syntax alternatives with several people - incl. Dennis Ritchie - but I don't remember which languages I looked at then.

Note that in const pointers, "const" always comes after the "*". For example:
int *const p1 = q; // constant pointer to int variable
int const* p2 = q; // pointer to constant int
const int* p3 = q; // pointer to constant int

What good is static_cast?
Casts are generally best avoided. With the exception of dynamic_cast, their use implies the possibility of a type error or the truncation of a numeric value. Even an innocent-looking cast can become a serious problem if, during development or maintenance, one of the types involved is changed. For example, what does this mean?:
x = (T)y;
We don't know. It depends on the type T and the types of x and y. T could be the name of a class, a typedef, or maybe a template parameter. Maybe x and y are scalar variables and (T) represents a value conversion. Maybe x is of a class derived from y's class and (T) is a downcast. Maybe x and y are unrelated pointer types. Because the C-style cast (T) can be used to express many logically different operations, the compiler has only the barest chance to catch misuses. For the same reason, a programmer may not know exactly what a cast does. This is sometimes considered an advantage by novice programmers and is a source of subtle errors when the novice guessed wrong.

The "new-style casts" were introduced to give programmers a chance to state their intentions more clearly and for the compiler to catch more errors. For example:
int a = 7;
double* p1 = (double*) &a; // ok (but a is not a double)
double* p2 = static_cast<double*>(&a); // error
double* p2 = reinterpret_cast<double*>(&a); // ok: I really mean it

const int c = 7;
int* q1 = &c; // error
int* q2 = (int*)&c; // ok (but *q2=2; is still invalid code and may fail)
int* q3 = static_cast<int*>(&c); // error: static_cast doesn't cast away const
int* q4 = const_cast<int*>(&c); // I really mean it
The idea is that conversions allowed by static_cast are somewhat less likely to lead to errors than those that require reinterpret_cast. In principle, it is possible to use the result of a static_cast without casting it back to its original type, whereas you should always cast the result of a reinterpret_cast back to its original type before using it to ensure portability.

A secondary reason for introducing the new-style cast was that C-style casts are very hard to spot in a program. For example, you can't conveniently search for casts using an ordinary editor or word processor. This near-invisibility of C-style casts is especially unfortunate because they are so potentially damaging. An ugly operation should have an ugly syntactic form. That observation was part of the reason for choosing the syntax for the new-style casts. A further reason was for the new-style casts to match the template notation, so that programmers can write their own casts, especially run-time checked casts.

Maybe, because static_cast is so ugly and so relatively hard to type, you're more likely to think twice before using one? That would be good, because casts really are mostly avoidable in modern C++.

So, what's wrong with using macros?
Macros do not obey the C++ scope and type rules. This is often the cause of subtle and not-so-subtle problems. Consequently, C++ provides alternatives that fit better with the rest of C++, such as inline functions, templates, and namespaces.

Consider:
#include "someheader.h"

struct S {
int alpha;
int beta;
};
If someone (unwisely) has written a macro called "alpha" or a macro called "beta" this may not compile or (worse) compile into something unexpected. For example, "someheader.h" may contain:
#define alpha 'a'
#define beta b[2]
Conventions such as having macros (and only macros) in ALLCAPS help, but there is no language-level protection against macros. For example, the fact that the member names were in the scope of the struct didn't help: macros operate on a program as a stream of characters before the compiler proper sees it. This, incidentally, is a major reason that C and C++ program development environments and tools have been unsophisticated: the human and the compiler see different things.

Unfortunately, you cannot assume that other programmers consistently avoid what you consider "really stupid". For example, someone recently reported to me that they had encountered a macro containing a goto. I have seen that also and heard arguments that might - in a weak moment - appear to make sense. For example:
#define prefix get_ready(); int ret__
#define Return(i) ret__=i; do_something(); goto exit
#define suffix exit: cleanup(); return ret__

void f()
{
prefix;
// ...
Return(10);
// ...
Return(x++);
//...
suffix;
}
Imagine being presented with that as a maintenance programmer; "hiding" the macros in a header - as is not uncommon - makes this kind of "magic" harder to spot.

One of the most common subtle problems is that a function-style macro doesn't obey the rules of function argument passing. For example:
#define square(x) (x*x)

void f(double d, int i)
{
square(d); // fine
square(i++); // ouch: means (i++*i++)
square(d+1); // ouch: means (d+1*d+1); that is, (d+d+1)
// ...
}
The "d+1" problem is solved by adding parentheses in the "call" or in the macro definition:
#define square(x) ((x)*(x)) /* better */
However, the problem with the (presumably unintended) double evaluation of i++ remains.

And yes, I do know that there are things known as macros that don't suffer the problems of C/C++ preprocessor macros. However, I have no ambitions for improving C++ macros. Instead, I recommend the use of facilities from the C++ language proper, such as inline functions, templates, constructors (for initialization), destructors (for cleanup), exceptions (for exiting contexts), etc.

How do you pronounce "cout"?
"cout" is pronounced "see-out". The "c" stands for "character" because iostreams map values to and from byte (char) representations.

How do you pronounce "char"?
"char" is usually pronounced "tchar", not "kar". This may seem illogical because "character" is pronounced "ka-rak-ter", but nobody ever accused English pronunciation (not "pronounciation" :-) and spelling of being logical.

Source: http://www2.research.att.com/~bs/bstechfaq.htm

========================================================================================

C++ Style and Technique FAQ (Chinese version)

Written by Bjarne Stroustrup, translated into Chinese by 紫云英
[Translator's note: this translation was authorized by Dr. Stroustrup. To reprint it, please contact me: [email protected]]

Q: How do I write this very simple program?

A: People often ask me how to write some simple program, especially at the beginning of a semester. A typical question: read some numbers, do something with them (say, some arithmetic), and then print the result. Well then, here is a "generic demonstration program":
	#include<iostream>
	#include<vector>
	#include<algorithm>
	using namespace std;

	int main()
	{
		vector<double> v;

		double d;
		while(cin>>d) v.push_back(d);	// read elements
		if (!cin.eof()) {		// check if input failed
			cerr << "format error/n";
			return 1;	// error return
		}
		cout << "read " << v.size() << " elements/n";
		reverse(v.begin(),v.end());
		cout << "elements in reverse order:/n";
		for (int i = 0; i<v.size(); ++i) cout << v[i] << '/n';

		return 0; // success return
	}
Simple, isn't it? Here are a few observations about it:
  • This is a program written in standard C++, using the standard library. [Translator's note: the standard library consists mainly of standardized versions of the C run-time support library (the Standard C Library), the iostream library, the STL (Standard Template Library), and so on.] Standard library facilities live in namespace std, and the headers needed to use them have no .h suffix. [Translator's note: for compatibility, some compiler vendors also ship headers with the .h suffix.]
  • If you compile under Windows, set the build option to "console application". Remember to give your source file the .cpp suffix, or the compiler may treat it as C code.
  • The main function main() must return an int. [Translator's note: some compilers also accept a void main() definition, but that is non-standard.]
  • Reading the input into a standard-library vector container guarantees that you don't make "buffer overflow" errors. Demanding that a beginner "read the input into an array without making any 'silly errors'" seems a bit much; anyone who can meet that demand is no longer a complete beginner. If you don't believe this claim, read my paper "Learning Standard C++ as a New Language". [Translator's note: a Chinese translation of that paper is available in the CSDN documentation area.]
  • The "!cin.eof()" in the code tests the input stream's format. Specifically, it tests whether the loop reading the input terminated because it reached EOF. If not, the input was in the wrong format (not all numbers). If any details are still unclear, see the section on "stream state" in whatever textbook you are using.
  • A vector knows its own size, so there is no need to count the input elements yourself.
  • This program contains no explicit memory-management code and produces no memory leaks. A vector manages its own memory, so the user need not worry about it.
  • For how to read strings, see the entry "How do I read a string from standard input?" later on.
  • This program uses EOF to terminate its input. If you run it on UNIX, Ctrl-D produces EOF. The version of Windows you are using may contain a bug (http://support.microsoft.com/support/kb/articles/Q156/2/58.asp?LN=EN-US&SD=gn&FR=0&qry=End of File&rnk=11&src=DHCS_MSPSS_gn_SRCH&SPR=NTW40 ) that prevents the system from recognizing the EOF character. If so, the following slightly modified program, which uses the word "end" as the input terminator, may suit you better:
    	#include<iostream>
    	#include<vector>
    	#include<algorithm>
    	#include<string>
    	using namespace std;
    
    	int main()
    	{
    		vector<double> v;
    
    		double d;
    		while(cin>>d) v.push_back(d);	// read elements
    		if (!cin.eof()) {		// check if input failed
    			cin.clear();		// clear error state
    			string s;
    			cin >> s;		// look for terminator string
    			if (s != "end") {
    				cerr << "format error/n";
    				return 1;	// error return
    			}
    		}
    
    		cout << "read " << v.size() << " elements/n";
    
    		reverse(v.begin(),v.end());
    		cout << "elements in reverse order:/n";
    		for (int i = 0; i<v.size(); ++i) cout << v[i] << '/n';
    
    		return 0; // success return
    	}
    
    
    
The standard-library chapters of The C++ Programming Language, 3rd edition, contain more and longer examples of how the standard library can be used to "do simple tasks simply".

Q: Why do my compiles take so long?

A: Maybe there is something wrong with your compiler: is it too old, or incorrectly installed? Or maybe your computer belongs in a museum? With problems like that I really can't help you.

However, the cause may well be your program: can its design be improved? Does the compiler have to swallow hundreds of header files and tens of thousands of lines of source code just to produce a correct binary? In principle, suitable reorganization of the source code should cure slow compilation. If the root of the problem is your library vendor, there is probably little you can do besides "switching library vendors"; but if the problem is in your own code, refactoring can make the code better structured, so that a change to the source requires a minimal amount of recompilation. Such code is often a better design as well: it has less coupling and better maintainability.

Consider a classical object-oriented example:

	class Shape {
	public:		// interface to users of Shapes
		virtual void draw() const;
		virtual void rotate(int degrees);
		// ...
	protected:	// common data (for implementers of Shapes)
		Point center;
		Color col;
		// ...
	};

	class Circle : public Shape {
	public:
		void draw() const;
		void rotate(int) { }
		// ...
	protected:
		int radius;
		// ...
	};

	class Triangle : public Shape {
	public:
		void draw() const;
		void rotate(int);
		// ...
	protected:
		Point a, b, c;
		// ...
	};

The design idea shown here is: let users handle "all kinds of shapes" through Shape's public interface, while Shape's protected members provide facilities that the derived classes (such as Circle and Triangle) need in common. In other words: factor what the various shapes share into the base class Shape. This idea sounds quite reasonable, but note:
  • Determining "which facilities will be used by all derived classes and should therefore be implemented in the base class" is not simple. Consequently, the protected members of the base class may change in response to changing requirements, and far more often than the public interface. For example, even though treating "center" as an attribute of every shape (and therefore declaring it in the base class) may seem the most natural thing in the world, it is a nuisance to keep the center coordinates of a triangle up to date in the base class at all times; it is cheaper to compute the center only when it is needed.
  • Unlike the abstract public interface, the protected members may depend on implementation details that Shape's users would rather not be exposed to. For example, most code using a Shape should be logically independent of color; yet as soon as the declaration of color appears in class Shape, the compiler will typically have to read in, expand, and compile the header file defining "the representation of colors on this operating system". All that takes time!
  • When the implementation of a protected member of the base class (such as center or color above) changes, all code using class Shape must be recompiled, even though only a small part of that code actually uses "the protected member whose semantics changed".

So, putting facilities "helpful to the implementation of derived classes" into the base class may be well-intentioned, but in reality it is a source of trouble. User requirements keep changing, so implementation code keeps changing too. Put that volatile code into a base class used by many derived classes, and the change is no longer local: it has global effects. Concretely, when a header file that the base class depends on changes, all the files containing derived classes must be recompiled.

After this analysis, the cure is obvious: use the base class purely as an abstract public interface, and move the implementation facilities "useful to derived classes" out of it.

	class Shape {
	public:		// interface to users of Shapes
		virtual void draw() const = 0;
		virtual void rotate(int degrees) = 0;
		virtual Point center() const = 0;
		// ...
		// no data
	};

	class Circle : public Shape {
	public:
		void draw() const;
		void rotate(int) { }
		Point center() const { return cent; }
		// ...
	protected:
		Point cent;
		Color col;
		int radius;
		// ...
	};

	class Triangle : public Shape {
	public:
		void draw() const;
		void rotate(int);
		Point center() const;
		// ...
	protected:
		Color col;
		Point a, b, c;
		// ...
	};



Now changes in the derived classes are isolated. The recompilation time caused by a change can shrink dramatically.

But what if some facilities really are to be shared by all derived classes (or just a few of them), and you don't want to repeat the code in each? That's easy too: encapsulate those facilities in a class, and let any derived class that needs them inherit from that class as well:

	class Shape {
	public:		// interface to users of Shapes
		virtual void draw() const = 0;
		virtual void rotate(int degrees) = 0;
		virtual Point center() const = 0;
		// ...
		// no data
	};

	struct Common {
		Color col;
		// ...
	};

	class Circle : public Shape, protected Common {
	public:
		void draw() const;
		void rotate(int) { }
		Point center() const { return cent; }
		// ...
	protected:
		Point cent;
		int radius;
	};

	class Triangle : public Shape, protected Common {
	public:
		void draw() const;
		void rotate(int);
		Point center() const;
		// ...
	protected:
		Point a, b, c;
	};



[Translator's note: the author's approach here is to isolate change and reduce coupling. From this example the reader can also pick up a little introductory knowledge of refactoring. :O)]


Q: Why is the size of an empty class not zero?

A: To ensure that the addresses of two different objects will be different. For the same reason, the pointers returned by new always point to distinct objects. Let's look at some code:
	class Empty { };

	void f()
	{
		Empty a, b;
		if (&a == &b) cout << "impossible: report error to compiler supplier";

		Empty* p1 = new Empty;
		Empty* p2 = new Empty;
		if (p1 == p2) cout << "impossible: report error to compiler supplier";
	}	


In addition, C++ has an interesting rule: an empty base class need not be represented by a separate byte:
	struct X : Empty {
		int a;
		// ...
	};

	void f(X* p)
	{
		void* p1 = p;
		void* p2 = &p->a;
		if (p1 == p2) cout << "nice: good optimizer";
	}


If p1 and p2 above are equal, the compiler performed the optimization. That optimization is safe and most useful. It allows a programmer to use empty classes to represent very simple concepts without paying any extra (space) cost for them. Some current compilers provide this "empty base class optimization".


Q: Why do I have to put the data in my class declarations?

A: Nobody forces you to. If you don't want data in an interface, don't put it in the class that defines the interface; put it in derived classes instead. See the entry "Why do my compiles take so long?". [Translator's note: in this FAQ, declare/declaration and define/definition are consistently distinguished; for the basic difference in meaning, see the translator's note in the entry "Is ``int* p;'' right or is ``int *p;'' right?". Strictly speaking, the example code below would normally be called the definition of class complex, while the single line "class complex;" would be called a declaration.]
But sometimes you really do need the data in the class declaration, as in the following complex number class:
	template<class Scalar> class complex {
	public:
		complex() : re(0), im(0) { }
		complex(Scalar r) : re(r), im(0) { }
		complex(Scalar r, Scalar i) : re(r), im(i) { }
		// ...

		complex& operator+=(const complex& a)
			{ re+=a.re; im+=a.im; return *this; }
		// ...
	private:
		Scalar re, im;
	};


This complex (complex number) class is designed to be used much like a built-in C++ type, so the data representation must appear in the declaration, so that genuinely local objects can be created (objects allocated on the stack rather than on the heap), and so that simple operations are properly inlined. "Local objects" and "inlining" both matter, because they are what allow our complex class to approach the efficiency of a language with a built-in complex type.

[Translator's note: I feel Bjarne's answer somewhat dodges the question. The questioner's real intent is probably to learn how to separate "interface" from "implementation" completely in C++. Unfortunately, the C++ language and its class mechanism do not directly provide a way. As we all know, the "interface" part of a class is usually made public (often a set of virtual functions), and the "implementation" part protected or private (both functions and data); but the "public", "protected", and "private" sections must all appear in the class declaration, which ships in the header file containing it. Presumably that is where the question "why must the data go into the class declaration" comes from. There is a workaround: using the Proxy pattern (see the book Design Patterns: Elements of Reusable Object-Oriented Software), we can declare the implementation part inside a proxy class (so-called "object composition") without exposing the proxy class's declaration to users. For example:

	class Implementer; // forward declaration

	class Interface {
	public:
		// interface

	private:
		Implementer* impl;	// pointer to the implementation
	};

In this example, class Implementer is the proxy. What Interface exposes to users is only a "stub" for an impl object, with no implementation content. Implementer can be declared like this:

	class Implementer {
	public:
		// implementation details, including data members
	};

The "data" the questioner asked about can live where the comment is, and Implementer's declaration need not be exposed to users. The Proxy pattern is not perfect, though: Interface calling the implementation code indirectly through the impl pointer adds extra overhead. Readers may say: doesn't C++ have an inlining mechanism? Can't inline definitions make up for that overhead? But don't forget, the whole point of applying the Proxy pattern here is to hide the "implementation" part, and that "hiding" usually means the "implementation code" exists as binary code in a linked library. Can current C++ compilers and linkers achieve both "inlined code" and "hidden binaries"? Perhaps. And can the Proxy pattern "cooperate happily" with C++'s template mechanism? (In other words, what if Interface and Implementer in the code above were not classes but templates?) The crux is whether the compiler's implementation of inlining and templates requires copying source code, or can work by copying binary code. At present, C#'s generics are implemented at the Intermediate Language level, while C++'s templates work at the source level. Bjarne's statement in the complex class example that "the data representation must appear in the class declaration" is partly due to this as well. Well, I have digressed; after all, this passage is only a "translator's note" to the FAQ, so I will not explore the matter further here; interested readers can seek the answers themselves :O)]

Q: Why are member functions not virtual by default?

A: Because many classes are not designed to be used as base classes. [Translator's note: classes meant to be used as base classes often resemble the "interface" concept of other languages; their role is to define a common interface for a group of classes. But C++ classes obviously have many other uses, such as representing a concrete extended type.] The complex number class, for example, is such a class.

Also, objects of a class with virtual functions pay the cost of the virtual-call mechanism [Translator's note: the space cost of storing the vtable pointer and the time cost of indirect calls through pointers in the vtable], typically one extra word per object. That overhead is not negligible, and it causes incompatibility with other languages (such as C and Fortran): the memory layout of a class with virtual functions differs greatly from that of an ordinary class. [Translator's note: such memory-layout compatibility problems cause trouble for mixed-language programming.]

See The Design and Evolution of C++ for more details of the design rationale.


Q: Why are destructors not virtual by default?

A: Ha, you can probably guess what I am going to say :O) Once again: because many classes are not designed to be used as base classes. Virtual functions make sense only in classes used as interfaces. (Objects of such classes are typically created on the heap and accessed through pointers or references.)

So when should I make a destructor virtual? Answer: whenever the class has any other virtual function. Having other virtual functions means the class is meant to be derived from, means it has something of an "interface" flavor. In that case, a programmer may well hold, through a base-class pointer, an object instantiated from a derived class; and whether such an object can be correctly destroyed through that base-class pointer depends on the destructor being virtual. For example:

	class Base {
		// ...
		virtual ~Base();
	};

	class Derived : public Base {
		// ...
		~Derived();
	};

	void f()
	{
		Base* p = new Derived;
		delete p;	// virtual destructor used to ensure that ~Derived is called
	}


Had Base's destructor not been virtual, Derived's destructor would not have been called, often with bad consequences: for example, resources allocated in Derived would not be released.


Q: Why doesn't C++ have virtual constructors?

A: The virtual mechanism is designed so that a programmer can use an object without fully knowing its details (for example, knowing only that the class implements some interface, not exactly what the class is). But to create an object, knowing "roughly what it is" is not enough: you need complete information; you must know exactly what it is you want to create. So a constructor naturally cannot be virtual.
However, sometimes a level of indirection is needed when creating objects, and that takes a little technique to achieve. (For details see The C++ Programming Language, 3rd edition, 15.6.2.) Such techniques are sometimes called "virtual constructors". Here is an example of using an abstract class to "virtually construct" objects:
	struct F {	// interface to object creation functions
		virtual A* make_an_A() const = 0;
		virtual B* make_a_B() const = 0;
	};

	void user(const F& fac)
	{
		A* p = fac.make_an_A();	// make an A of the appropriate type
		B* q = fac.make_a_B();	// make a B of the appropriate type
		// ...
	}

	struct FX : F {
		A* make_an_A() const { return new AX();	} // AX is derived from A
		B* make_a_B() const { return new BX();	} // BX is derived from B
	
	};

	struct FY : F {
		A* make_an_A() const { return new AY();	} // AY is derived from A
		B* make_a_B() const { return new BY();	} // BY is derived from B

	};

	int main()
	{
		user(FX());	// this user makes AXs and BXs
		user(FY());	// this user makes AYs and BYs
		// ...
	}


See how it works? The code above is actually a variant of the Factory pattern. The key point is that user() is completely isolated: it knows nothing at all about classes such as AX and AY. (Heh, sometimes ignorance has its advantages ^_^)


Q: Why doesn't overloading work for derived classes?

A: This question usually arises from examples like this one:
	#include<iostream>
	using namespace std;

	class B {
	public:
		int f(int i) { cout << "f(int): "; return i+1; }
		// ...
	};

	class D : public B {
	public:
		double f(double d) { cout << "f(double): "; return d+1.3; }
		// ...
	};

	int main()
	{
		D* pd = new D;

		cout << pd->f(2) << '\n';
		cout << pd->f(2.3) << '\n';
	}


The program produces:
	f(double): 3.3
	f(double): 3.6
and not, as some people (wrongly) guess:
	f(int): 3
	f(double): 3.6

In other words, no overload resolution takes place between D and B. When you call pd->f(), the compiler searches in D's scope, finds the single function double f(double), and calls it. It never bothers to look further in B's scope for a function that might match better. Remember: in C++ there is no overloading across scopes, and derived classes are no exception to this rule, however intimate the relationship between derived class and base class may be. For details, see The Design and Evolution of C++ or The C++ Programming Language, 3rd edition.

But if you insist on overloading across these scopes, there is a workaround: pull the functions into one common scope. A using-declaration does the trick:

	class D : public B {
	public:
		using B::f;	// make every f from B available
		double f(double d) { cout << "f(double): "; return d+1.3; }
		// ...
	};



Now the result is
	f(int): 3
	f(double): 3.6

Overloading happened, because the "using B::f" in D explicitly tells the compiler to bring B's f into the current scope and treat all the f's alike.


Q: Can I call a virtual function from a constructor?

A: Yes, but be careful: when you do, you may not know what you are doing! In a constructor, the virtual mechanism is not yet in effect, because overriding has not yet happened. Every tall building starts from the ground: you lay the foundation before you put up the walls. Objects are built the same way: the base class is constructed first, and the derived class is built on top of it.
Look at this example:
	#include<string>
	#include<iostream>
	using namespace std;

	class B {
	public:
		B(const string& ss) { cout << "B constructor\n"; f(ss); }
		virtual void f(const string&) { cout << "B::f\n";}
	};

	class D : public B {
	public:
		D(const string & ss) :B(ss) { cout << "D constructor\n";}
		void f(const string& ss) { cout << "D::f\n"; s = ss; }
	private:
		string s;
	};

	int main()
	{
		D d("Hello");
	}


Compiled and run, the program produces:
	B constructor
	B::f
	D constructor
Note that the output is not D::f.

What happened? f() was called from B::B(). If the rule for calling virtual functions from constructors were not as described above, and instead, as some people expect, D::f() were called, then, because the constructor D::D() had not yet run and the string s had not yet been initialized, D::f()'s attempt to assign its argument to s would most likely crash on the spot.

Destruction goes the other way, from derived class to base class (you tear a house down from the top, don't you?), so virtual functions behave in destructors as they do in constructors: whatever the virtual function is bound to at that very moment gets called (the base class's version, of course, since the derived part has already been "torn down", i.e., destroyed).

For more details, see The Design and Evolution of C++, 13.2.4.2, or The C++ Programming Language, 3rd edition, 15.4.3.

Sometimes this rule is explained as an artifact of compiler implementation. [Translator's note: from the implementation angle it can indeed be explained this way: in many compilers the vtable is not set up until the constructor has been called, so only then are virtual functions dynamically bound to the derived class's overriders.] But that is not so: implementing "virtual calls from constructors work just like calls from any other function" would be easy [Translator's note: just set up the final vtable before the constructors are called]. The rule is a deliberate language-design decision: it lets a virtual function rely on common code provided by its base class. [Translator's note: chicken or egg? Bjarne is in effect telling you that it is not "implementation first, rule second" but "implemented that way because the rule says so".]


Q: 有"placement delete"吗?

A: 没有。不过如果你真的想要,你就说嘛——哦不,我的意思是——你可以自己写一个。
我们来看看将对象放至某个指定场所的placement new:
	class Arena {
	public:
		void* allocate(size_t);
		void deallocate(void*);
		// ...
	};

	void* operator new(size_t sz, Arena& a)
	{
		return a.allocate(sz);
	}

	Arena a1(some arguments);
	Arena a2(some arguments);

Now we can write:
	X* p1 = new(a1) X;
	Y* p2 = new(a1) Y;
	Z* p3 = new(a2) Z;
	// ...

But how do we delete those objects correctly afterwards? The reason there is no built-in "placement delete" is that no general placement delete can be provided. The C++ type system gives us no way to deduce that p1 points to an object placed in a1. Even if we could somehow brilliantly deduce that, a simple pointer assignment would leave us clueless again. However, the programmer himself does know what points to what in his own program, so there is a solution:
	template<class T> void destroy(T* p, Arena& a)
	{
		if (p) {
			p->~T();		// explicit destructor call
			a.deallocate(p);
		}
	}


Now we can write:
	destroy(p1,a1);
	destroy(p2,a1);
	destroy(p3,a2);
If an Arena keeps track of the objects placed in it, you can even write destroy() so that the burden of "guaranteeing correctness" falls on the Arena rather than on you.

For how to define matched pairs of operator new() and operator delete() for a class hierarchy, see The C++ Programming Language, Special Edition, 15.6; The Design and Evolution of C++, 10.4; and The C++ Programming Language, Special Edition, 19.4.5. [Translator's note: translated as in the original. Where earlier entries say "see The C++ Programming Language, 3rd edition", note that the Special Edition and recent printings of the 3rd edition are essentially identical.]


Q: Can I stop people deriving from my class?

A: You can, but why bother? Well, there are perhaps two reasons:

  • efficiency: so that my function calls will not be virtual;
  • safety: to ensure that my class is not used as a base class (so that when I copy objects I need not worry about slicing). [Translator's note: "slicing" means that when a derived-class object is assigned to a base-class variable, by C++'s conversion rules only the base-class part contained in the derived object is copied; the rest is "sliced off".]

In my experience, the "efficiency" reason is usually unfounded. In C++, virtual function calls are so fast that there is not much difference from ordinary function calls. Note that the virtual-call mechanism is engaged only when you call through a pointer or a reference; if you call a function by naming the object, the C++ compiler automatically optimizes away any extra overhead.

If there is a genuine need to "cap" a class hierarchy to say goodbye to "virtual function calls", one might first ask why those functions were made virtual at all. I have seen examples where performance-critical functions were made virtual for no better reason than "that's what we usually do"!

All right; after all that, what you really want to know is whether, for some legitimate reason, you can prevent others from deriving from your class. The answer is yes. Unfortunately, the solution given here is not clean: you must, in the class to be "capped", virtually inherit from a helper base class that cannot be constructed. Let the example tell the whole story:

	class Usable;

	class Usable_lock {
		friend class Usable;
	private:
		Usable_lock() {}
		Usable_lock(const Usable_lock&) {}
	};

	class Usable : public virtual Usable_lock {
		// ...
	public:
		Usable();
		Usable(char*);
		// ...
	};

	Usable a;

	class DD : public Usable { };

	DD dd;	// error: DD::DD() cannot access
		// Usable_lock::Usable_lock(): private member
(See The Design and Evolution of C++, 11.4.3.)


Q: Why can't I constrain my template parameters?

A: Well, actually you can, and the technique is not difficult, nor does it require anything out of the ordinary.

Consider this code:

	template<class Container>
	void draw_all(Container& c)
	{
		for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
	}

If c does not meet the constraints and a type error occurs, the error will surface deep in the rather complicated resolution of for_each. For example, if the template is instantiated for a container of ints, we cannot call Shape::draw() for them, and the error message we get from the compiler is vague and confusing, because it is entangled with the standard library's complicated for_each.

To catch the error earlier, we can write the code like this:

	template<class Container>
	void draw_all(Container& c)
	{
		Shape* p = c.front();	// accept only containers of Shape*s

		for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
	}

Note the added definition of Shape* p (even though p itself serves no purpose in the program). If c.front() cannot be assigned to a Shape*, then with most current compilers we get a reasonably clear error message. Tricks like this are common in all languages and have to be developed for all unusual constructs. [Translator's note: that is, in any language, once we start probing the limits, we end up having to write some highly technical code.]
That's still not ideal, though. If I were writing production code, I would probably write something like:
	template<class Container>
	void draw_all(Container& c)
	{
		typedef typename Container::value_type T;
		Can_copy<T,Shape*>(); // accept containers of only Shape*s

		for_each(c.begin(),c.end(),mem_fun(&Shape::draw));
	}


This makes the code general and clearly expresses my intent: I am making an assertion. [Translator's note: i.e., explicitly asserting that typename Container is a container type acceptable to draw_all(), rather than confusingly defining a Shape* pointer that may or may not be used somewhere later.] The Can_copy() template can be defined like this:
	template<class T1, class T2> struct Can_copy {
		static void constraints(T1 a, T2 b) { T2 c = a; b = a; }
		Can_copy() { void(*p)(T1,T2) = constraints; }
	};

Can_copy checks at compile time that a T1 can be assigned to a T2. Can_copy<T,Shape*> checks that T is a Shape*, or a pointer to a class publicly derived from Shape, or a user-defined type convertible to Shape*. Note that this implementation of Can_copy() is already close to minimal: one line to name the constraint to be checked and the types for which to check it [Translator's note: line 1; the constraint is T2 and the type to be checked is T1]; one line to list precisely the constraints checked (the constraints() function) [Translator's note: the two statements in line 2 are not redundant; "T2 c = a;" checks that a T2 can be initialized from a T1, and "b = a;" checks that a T2 can be assigned from a T1]; and one line to provide a way to trigger the check [Translator's note: line 3; Can_copy is a class template and constraints() a member function, which line 2 only defines but never executes].
[Translator's note: the key to this constraints technique is C++'s strong type system, particularly the polymorphism of classes. The statements "T2 c = a; b = a;" in line 2 compile only if T1 implements T2's interface; concretely, in one of four cases: (1) T1 and T2 are the same type; (2) operator= is overloaded appropriately; (3) a cast operator (type-conversion operator) is provided; (4) a derived-class object is assigned to a base-class pointer. This reminds me that in an earlier article I said that C++'s genericity implementation, the template, does not support constrained genericity, while Eiffel supports constrained genericity at the syntax level (providing syntax along the lines of template<typename T as Comparable> xxx, where Comparable is a constraint). Some readers objected that this was wrong and that C++ templates do support constrained genericity. The passage above shows how, by combining OOP and GP techniques, constrained genericity can be achieved quite cleverly in C++. For readers who love C++, the technique rewards careful savoring. But don't become so absorbed in every coding trick that you lose the overall view: sometimes a gap in language support is more elegantly filled at the design level than at the code level. Also, whether this counts as "C++ templates supporting constrained genericity", I reserve judgment; just as one can do OOP in C with some tricks, yet we don't say the C language supports OOP.]
Note also that our definition now has the properties we want:
  • You can express constraints without declaring or copying variables [Translator's note: the declaring/copying work is encapsulated inside the Can_copy template], so the writer of a constraint need not make assumptions about how a type is initialized, nor worry about whether objects can be copied or destroyed (unless those are exactly the properties the constraint checks). [Translator's note: that is, unless the constraint is precisely "copyable" or "destructible"; in easy-to-read pseudocode, template<typename T as Copy_Enabled> xxx or template<typename T as Destructible> xxx.]
  • With current compilers, constraints add no extra code.
  • No macros are needed either to define or to use constraints.
  • If a constraint is not satisfied, the compiler gives a comprehensible error message. In fact, the message includes the word "constraints" (to give the programmer a hint), the name of the constraint, and the precise cause of the error (e.g., "cannot initialize Shape* by double*").

So why not simply build something like Can_copy(), or a more elegant and concise variant, into the C++ language itself? The Design and Evolution of C++ analyzes the difficulties of doing so. Many, many design ideas have surfaced, all aiming to make templates with constraints easy to write while having the compiler emit comprehensible error messages when a constraint is not satisfied. For example, the use of function pointers in my Can_copy derives from Alex Stepanov and Jeremy Siek. I don't think my Can_copy() implementation is ready for standardization: it needs more testing in practice. Also, C++ users encounter many different kinds of constraints, and so far no one form of constrained template seems to command overwhelming support.

Quite a few proposals for "built-in language support" for constraints have been made and implemented. But nothing out of the ordinary is actually needed to express a constraint: after all, when we write a template, we have the full expressive power C++ gives us. Let the code bear witness:

	template<class T, class B> struct Derived_from {
		static void constraints(T* p) { B* pb = p; }
		Derived_from() { void(*p)(T*) = constraints; }
	};

	template<class T1, class T2> struct Can_copy {
		static void constraints(T1 a, T2 b) { T2 c = a; b = a; }
		Can_copy() { void(*p)(T1,T2) = constraints; }
	};

	template<class T1, class T2 = T1> struct Can_compare {
		static void constraints(T1 a, T2 b) { a==b; a!=b; a<b; }
		Can_compare() { void(*p)(T1,T2) = constraints; }
	};

	template<class T1, class T2, class T3 = T1> struct Can_multiply {
		static void constraints(T1 a, T2 b, T3 c) { c = a*b; }
		Can_multiply() { void(*p)(T1,T2,T3) = constraints; }
	};

	struct B { };
	struct D : B { };
	struct DD : D { };
	struct X { };

	int main()
	{
		Derived_from<D,B>();
		Derived_from<DD,B>();
		Derived_from<X,B>();
		Derived_from<int,B>();
		Derived_from<X,int>();

		Can_compare<int,float>();
		Can_compare<X,B>();
		Can_multiply<int,float>();
		Can_multiply<int,float,double>();
		Can_multiply<B,X>();
	
		Can_copy<D*,B*>();
		Can_copy<D,B*>();
		Can_copy<int,B*>();
	}

	// the classical "elements must be derived from Mybase" constraint:

	template<class T> class Container : Derived_from<T,Mybase> {
		// ...
	};

Derived_from doesn't actually check derivation; it checks convertibility. But Derived_from is often the better name — finding good names for constraints can itself take careful thought.


Q: Why do we need sort() when we already have "good old qsort()"?

A: To a novice,
	qsort(array,asize,sizeof(elem),elem_compare);


looks a bit weird, whereas
	sort(vec.begin(),vec.end());

is easier to understand. That alone is reason enough to prefer sort(). To an expert, it also matters that sort() tends to be faster than qsort(), and that sort() is generic: it works for any reasonable combination of container type, element type, and comparison criterion. For example:
	struct Record {
		string name;
		// ...
	};

	struct name_compare {	// compare Records using "name" as the key
		bool operator()(const Record& a, const Record& b) const
			{ return a.name<b.name; }
	};

	void f(vector<Record>& vs)
	{
		sort(vs.begin(), vs.end(), name_compare());
		// ...
	}	


Many people also appreciate that sort() is type safe: no casts are needed to use it, and for standard types there is no need to write a compare() function, which saves some effort. For a more detailed explanation, see my paper "Learning Standard C++ as a New Language".

Finally, why is sort() faster than qsort()? Because it makes better use of C++ inlining: the comparison handed to sort() can be inlined, whereas qsort() must call its comparison through a function pointer.


Q: What is a function object?

A: A function object is an object that acts like a function. In general, it is an object of a class that overloads operator().

A function object is a more general concept than an ordinary function, because it can hold state across calls — much like a static local variable — but unlike a static local variable, that state can be initialized from outside the object and examined from outside it. For example:

	class Sum {
		int val;
	public:
		Sum(int i) :val(i) { }
		operator int() const { return val; }		// extract value

		int operator()(int i) { return val+=i; }	// application
	};

	void f(vector<int> v)
	{
		Sum s = 0;	// initial value 0
		s = for_each(v.begin(), v.end(), s);	// gather the sum of all elements
		cout << "the sum is " << s << '\n';
	
		// or even:
		cout << "the sum is " << for_each(v.begin(), v.end(), Sum(0)) << '\n';
	}


Note that a function object can be inlined beautifully: from the compiler's point of view there are no troublesome pointers to confuse matters, so the optimization is easy. [Translator's note: that is, defining operator() as an inline function improves efficiency.] By contrast, it is — at least today — essentially beyond a compiler to optimize away the cost of a call made through a function pointer.

Function objects are used extensively in the standard library, which gives it great flexibility and extensibility.

[Translator's note: C++ borrows liberally from many paradigms; the function object concept comes from functional programming, and the power and expressiveness of C++ make such borrowing practical. Wherever a function object is used, a function pointer can generally be used instead, and before we are comfortable with function objects we often do use pointers. But the syntax for declaring a function pointer is hardly simple or clear, pointers have long been notorious in C++ as a source of errors, and calling through a pointer adds indirection overhead. So both for cleaner syntax and for better performance, function objects are to be preferred.

Let us also look at function objects from a design-patterns angle: they are a typical application of the Visitor pattern. When we want to apply some operation to one or more objects without fixing the operation in advance, Visitor applies. In Design Patterns, the authors implement it by supplying the operation through a Visitor class (in Bjarne Stroustrup's code above, Sum is a variant of a Visitor), instantiating a visitor object from it (in the code above, s), and then, as an Iterator traverses the objects, calling visitor.visit() for each one. Here visit() is a member function of the Visitor class playing the role of Sum's "special member function", operator(); visit(), too, can be defined inline to remove the indirection and improve performance. Note that C++ treats an overloaded operator as a function as well — just one with a special name. So the sample Visitor implementation in Design Patterns and the function object implementation here are essentially equivalent: a function object is a special kind of Visitor.]


Q: How do I deal with memory leaks?

A: Simple: write code that doesn't leak. Clearly, if your code is full of news, deletes, and pointer arithmetic all over the place, keeping it leak-free is hard. No matter how careful you are, you are human, not a god; errors will creep in, and as the code grows more complex you will be driven mad — locked in an endless struggle against memory leaks, faithfully tending your bugs until the mountains crumble and the earth stops turning. The technique for avoiding that fate is not complicated: rely on the allocation machinery hidden behind the scenes — construction and destruction — and let C++'s powerful class system do the work. The standard library containers are good examples: they let you manage memory comfortably without spending much time or effort on it. Consider the code below, and imagine a world without string and vector: could you write the same functionality, right the first time, with no memory errors?
	#include<vector>
	#include<string>
	#include<iostream>
	#include<algorithm>
	using namespace std;

	int main()	// small program messing around with strings
	{
		cout << "enter some whitespace-separated words:\n";
		vector<string> v;
		string s;
		while (cin>>s) v.push_back(s);

		sort(v.begin(),v.end());

		string cat;
		typedef vector<string>::const_iterator Iter;
		for (Iter p = v.begin(); p!=v.end(); ++p) cat += *p+"+";
		cout << cat << '\n';
	}


Note that there is no explicit memory management here: no macros, no casts, no overflow checks, no explicit size limits, and no pointers. Had I used a function object and a standard algorithm [Translator's note: one of the generic algorithms supplied by the standard library], I could have dispensed with the iterator too — but that seemed overkill for such a tiny program.

Of course, these techniques are not foolproof; they are easier to describe than to apply, and it is not always simple to use them systematically. Still, they apply surprisingly widely, and by removing a great deal of explicit allocation and deallocation code they genuinely improve readability and maintainability. As early as 1981, I pointed out that by drastically reducing the number of objects that have to be managed explicitly, "getting things right" with C++ need no longer be a heroic, exhausting effort.

If your application area has no libraries that ease memory management in this way, and you want software development to be both fast and correct, the quickest route may well be to build such a library first.

If you cannot make allocation and deallocation part of an object's natural behavior, you can at least avoid leaks by using a resource handle. Here is an example. Suppose a function returns an object allocated on the free store; it is easy to forget to delete that object — after all, nothing about a pointer tells us whether its target needs to be freed, or who is responsible for freeing it. So use a resource handle: the standard library auto_ptr, for instance, makes explicit where the responsibility for deletion lies. Consider:

	#include<memory>
	#include<iostream>
	using namespace std;

	struct S {
		S() { cout << "make an S\n"; }
		~S() { cout << "destroy an S\n"; }
		S(const S&) { cout << "copy initialize an S\n"; }
		S& operator=(const S&) { cout << "copy assign an S\n"; }
	};

	S* f()
	{
		return new S;	// who is responsible for deleting this S?
	};

	auto_ptr<S> g()
	{
		return auto_ptr<S>(new S);	// explicitly transfer responsibility for deleting this S
	}

	int main()
	{
		cout << "start main\n";
		S* p = f();
		cout << "after f() before g()\n";
		// S* q = g();	// caught by compiler
		auto_ptr<S> q = g();
		cout << "exit main\n";
		// leaks *p
		// implicitly deletes *q
	}



This is just an example of managing memory; other kinds of resources can be handled the same way.

If you cannot use this approach systematically in your environment (say, you must use third-party legacy code, or prehistoric "cave dwellers" took part in your project), be sure to use a memory-leak detector as part of development — or simply plug in a garbage collector.


Q: Why can't I resume after catching an exception?

A: In other words: why doesn't C++ provide a primitive for returning to the point from which an exception was thrown and continuing execution there? [Translator's note: say, a simple resume statement, used much like the existing return statement, but only allowed at the end of an exception handler.]

Resuming after the handler sounds attractive [Translator's note: under the current design, after an exception is thrown and handled, execution continues after the catch block that handled it], but the main problem is that an exception handler cannot know how much cleanup must be done before the code after the throw point can safely run. [Translator's note: after all, when an exception occurs, something has already gone wrong — and tidying up a mess is never trivial.] To make "continue from here" work in general, the code that throws and the code that catches would need intimate knowledge of each other, which creates complicated mutual dependencies [Translator's note: between the developers as well as between the pieces of code — and tightly coupled code is not good code :O)] and serious maintenance problems.

I considered this seriously when I designed C++'s exception-handling mechanism, and the issue was discussed in detail during standardization (see the exception-handling chapter of The Design and Evolution of C++). If you want to see whether a problem can be fixed before throwing, call a "check and fix" function first, and throw only if the problem cannot be repaired. A new_handler is an example of this approach.


Q: Why doesn't C++ have an equivalent to C's realloc()?

A: If you really want to, you can of course use realloc(). However, realloc() is guaranteed to work only on memory obtained from malloc() and its C relatives, holding objects without user-defined constructors. And remember: contrary to what some naively assume, realloc() does occasionally copy large blocks of memory to a newly allocated contiguous area. So realloc() is nothing to get excited about ^_^

In C++, the better way to handle reallocation is to use a standard library container, such as vector. [Translator's note: these containers manage their own storage and "grow" — that is, reallocate — as needed.]


Q: How do I use exceptions?

A: See The C++ Programming Language, section 8.3, Chapter 14, and Appendix E. The appendix focuses on techniques for writing exception-safe code; it is not aimed at novices. A key technique is "resource acquisition is initialization", which uses class destructors to impose order on the otherwise messy business of resource management.


Q: How do I read a string from standard input?

A: To read a single, whitespace-terminated word, do:
	#include<iostream>
	#include<string>
	using namespace std;

	int main()
	{
		cout << "Please enter a word:\n";

		string s;
		cin >> s;

		cout << "You entered " << s << '\n';
	}



Note that there is no explicit memory management, and no fixed-size buffer that you could accidentally overflow. [Translator's note: Bjarne seems proud to point this out at every opportunity — justifiably, since it is one of the great benefits of string and of the standard library as a whole. In old C, what programmers complained about most was the lack of a built-in string type and the intricate memory management that manipulating strings therefore required. Bjarne must be thinking, smugly: "Ha, my little baby named C++ has finally grown up, nearly perfect!" :O)]

If you need a whole line at a time rather than a single word, do:

	#include<iostream>
	#include<string>
	using namespace std;

	int main()
	{
		cout << "Please enter a line:\n";

		string s;
		getline(cin, s);

		cout << "You entered " << s << '\n';
	}


For a brief introduction to what the standard library offers (iostreams, strings, and so on), see Chapter 3 of the third edition of The C++ Programming Language. For a concrete comparison of the C and C++ input/output facilities, see my paper "Learning Standard C++ as a New Language".


Q: Why doesn't C++ provide a "finally" construct?

A: Because C++ provides an alternative that makes finally redundant and almost always works better: "resource acquisition is initialization" (see The C++ Programming Language, section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor releases the resource automatically. That way, the programmer cannot forget to release it. [Translator's note: because C++'s object-lifetime machinery remembers for them :O)] For example:
	class File_handle {
		FILE* p;
	public:
		File_handle(const char* n, const char* a)
			{ p = fopen(n,a); if (p==0) throw Open_error(errno); }
		File_handle(FILE* pp)
			{ p = pp; if (p==0) throw Open_error(errno); }

		~File_handle() { fclose(p); }

		operator FILE*() { return p; }

		// ...
	};

	void f(const char* fn)
	{
		File_handle f(fn,"rw");	// open fn for reading and writing
		// use file through f
	}


In a system, we need a resource-handle object for each kind of resource, but we do not have to write a "finally" clause for every acquisition. In realistic systems, resources are acquired and released far more often than there are kinds of resources, so "resource acquisition is initialization" produces less code than a "finally" approach would.
[Translator's note: Object Pascal, Java, C#, and other languages all have finally blocks, commonly used to clean up allocated resources when an exception occurs — which means one finally block per resource acquisition (one missing finally means some acquisition is not exception safe). "Resource acquisition is initialization" moves the code that would sit in those finally blocks into class destructors: we only need one wrapper class per kind of resource. Which style needs more code? Unless every kind of resource in your system is used exactly once — in which case the amounts are equal — the finally style always needs more :O)]

Also see the resource-management examples in Appendix E of The C++ Programming Language.


Q: What is this auto_ptr thing, and why is there no auto_array?

A: auto_ptr is a very simple resource-handle class, defined in the <memory> header. It uses "resource acquisition is initialization" to ensure that a resource is released even when an exception is thrown ("exception safety"). An auto_ptr holds a pointer, can be used like a pointer, and automatically deletes the held object at the end of its lifetime. For example:
	#include<memory>
	using namespace std;

	struct X {
		int m;
		// ..
	};

	void f()
	{
		auto_ptr<X> p(new X);
		X* q = new X;

		p->m++;		// use p just like a pointer
		q->m++;
		// ...

		delete q;
	}

If an exception is thrown in the part marked // ..., the object held by p is correctly deleted — the credit goes to auto_ptr's destructor — while the X pointed to by q (a plain pointer, not an auto_ptr) is leaked. See The C++ Programming Language, section 14.4.2, for details.

auto_ptr is a very lightweight class; in particular, it does no reference counting. If you copy one auto_ptr (say, ap1) into another (say, ap2), ap2 ends up holding the real pointer while ap1 is left holding the null pointer. For example:

	#include<memory>
	#include<iostream>
	using namespace std;

	struct X {
		int m;
		// ..
	};

	int main()
	{
		auto_ptr<X> p(new X);
		auto_ptr<X> q(p);
		cout << "p " << p.get() << " q " << q.get() << "\n";
	}


The output should show a null pointer followed by a real pointer, something like:
	p 0x0 q 0x378d0


auto_ptr::get() returns the held pointer.

This is "transfer" semantics rather than copy semantics, which can be surprising. In particular, never use an auto_ptr as an element type of a standard container: the standard containers require the usual copy semantics. For example:

	std::vector<auto_ptr<X> > v;	// error


An auto_ptr holds a pointer to an individual element, not a pointer to an array:

	void f(int n)
	{
		auto_ptr<X> p(new X[n]);	// error
		// ...
	}


This is an error, because the destructor releases the pointer using delete rather than delete[], so the last n-1 Xs are never destroyed.

So shouldn't there be an auto_array-like thing that releases with delete[], for holding arrays? No — there is no auto_array, for the simple reason that none is needed: we can just use a vector:

	void f(int n)
	{
		vector<X> v(n);
		// ...
	}


If an exception occurs in the // ... part, v's destructor is invoked automatically.


Q: Can C-style and C++-style memory allocation and deallocation be mixed?

A: Yes — in the sense that malloc() and new can be used in the same program.

No — in the sense that you cannot delete an object allocated by malloc(), and you cannot free() or realloc() an object allocated by new.

C++'s new and delete operators guarantee proper construction and destruction; the C functions malloc(), calloc(), free(), and realloc() guarantee no such thing. Furthermore, nobody guarantees that the memory managed by new/delete and by malloc/free is "compatible". If mixing the two styles has caused you no trouble in your code so far, all I can say is: you have been very lucky — so far :O)

If you pine for "good old realloc()" (as many do) and therefore cannot let go of the whole venerable C allocation machinery, consider using the standard library vector instead. For example:

	// read words from input into a vector of strings:

	vector<string> words;
	string s;
	while (cin>>s && s!=".") words.push_back(s);


The vector grows as needed.

My paper "Learning Standard C++ as a New Language" gives further examples.


Q: Why must I use a cast to convert from void*?

A: In C, the conversion is implicit, but it is unsafe. For example:

	#include<stdio.h>

	int main()
	{
		char i = 0;
		char j = 0;
		char* p = &i;
		void* q = p;
		int* pp = q;	/* unsafe, legal C, not C++ */

		printf("%d %d\n", i, j);
		*pp = -1;	/* overwrite memory starting at &i */
		printf("%d %d\n", i, j);
	}



If a pointer of type T* does not actually point to an object of type T, the consequences can be disastrous; so in C++, converting a void* to a T* requires an explicit cast:

	int* pp = (int*)q;



Or, better still, use a new-style cast to make the conversion stand out:
	int* pp = static_cast<int*>(q);




Of course, best of all is not to cast at all.

One of the most common unsafe casts in C occurs when assigning memory obtained from malloc() to a pointer, for example:

	int* p = malloc(sizeof(int));


In C++, use the type-safe new operator instead:
	int* p = new int;


Moreover, new offers other advantages:
  • new cannot "accidentally" allocate the wrong amount of memory
  • new implicitly checks for memory exhaustion
  • new supports initialization
For example:
	typedef std::complex<double> cmplx;

	/* C style: */
	cmplx* p = (cmplx*)malloc(sizeof(int));	/* error: wrong size */
							/* forgot to test for p==0 */
	if (*p == 7) { /* ... */ }			/* oops: forgot to initialize *p */

	// C++ style:
	cmplx* q = new cmplx(1,2); // will throw bad_alloc if memory is exhausted
	if (*q == 7) { /* ... */ }



Q: How do I define an in-class constant?

A: If you want a constant that you can use in a constant expression, such as an array bound, you have two choices:

	class X {
		static const int c1 = 7;
		enum { c2 = 19 };

		char v1[c1];
		char v2[c2];

		// ...
	};

At first glance, the declaration of c1 seems the more straightforward, but remember that only static members of integral or enumeration type may be initialized this way. That is quite limiting, for example:

	class Y {
		const int c3 = 7;		// error: not static
		static int c4 = 7;		// error: not const
		static const float c5 = 7;	// error: not integral
	};

I still prefer the "enum trick", because such definitions are portable and don't tempt me into non-standard extensions of the in-class initialization syntax.

So why do these inconvenient restrictions exist? A class is typically declared in a header file, and a header file is typically included in many translation units. [Translator's note: so the class may be declared many times over.] To avoid complicating the linker, however, C++ requires that every object be defined exactly once. That requirement would be broken if C++ allowed in-class definition of entities that need to be stored in memory as objects. See The Design and Evolution of C++ for a discussion of the tradeoffs in the design of C++.

If the constant is not needed in a constant expression, your options are wider:

	class Z {
		static char* p;		// initialize in definition
		const int i;		// initialize in constructor
	public:
		Z(int ii) :i(ii) { }
	};

	char* Z::p = "hello, there";



You can take the address of a static member only if it has an out-of-class definition, for example:
	class AE {
		// ...
	public:
		static const int c6 = 7;
		static const int c7 = 31;
	};

	const int AE::c7;	// definition

	int f()
	{
		const int* p1 = &AE::c6;	// error: c6 not an lvalue
		const int* p2 = &AE::c7;	// ok
		// ...
	}



Q: Why doesn't delete zero out its operand?

A: That's a fair question. Consider:
	delete p;
	// ...
	delete p;


If the code in the // ... part does not make p point to new memory, this code frees the same memory twice. That is a serious error, and unfortunately C++ cannot effectively stop you from writing it. We do know, however, that deleting a null pointer is harmless, so if every delete p; were immediately followed by p = 0;, the double deletion could not happen. Even so, no C++ syntax forces a programmer to zero a pointer right after deleting it, so avoiding this mistake rests entirely on the programmer's shoulders. Perhaps, then, having delete zero the pointer automatically would be a good idea?

Alas, the idea isn't good enough. One reason is that the operand of delete need not be an lvalue. Consider:

	delete p+1;
	delete f(x);


What would you have delete zero out here? Such examples may be uncommon, but they show that "delete zeroes its operand" is not a safe rule. [Translator's note: what we would really want is "every pointer into the freed region is zeroed" — and, sadly, nothing short of a garbage collector can deliver that.] Now look at a simpler example:
	T* p = new T;
	T* q = p;
	delete p;
	delete q;	// ouch!


The C++ standard does permit an implementation to zero an lvalue operand of delete, and I had hoped implementations would do so, but the vendors don't seem to like the idea. One reason is the example above: even if delete zeroed p on line 3, q would not be zeroed, and line 4 would go wrong just the same.

If you consider zeroing pointers after deletion important, write a destroy function:

	template<class T> inline void destroy(T*& p) { delete p; p = 0; }


Consider the trouble delete can cause one more reason to use new/delete sparingly and the standard library containers generously :O)

Note that passing the pointer by reference (so that destroy can zero it) has the added benefit of preventing rvalues from being passed to destroy():

	int* f();
	int* p;
	// ...
	destroy(f());	// error: trying to pass an rvalue by non-const reference
	destroy(p+1);	// error: trying to pass an rvalue by non-const reference



Q: Can I write "void main()"?

A: The definition
	void main() { /* ... */ }


is not C++, nor is it C. (See the ISO C++ standard, 3.6.1[2], or the ISO C standard, 5.1.2.2.1.) A conforming implementation accepts
	int main() { /* ... */ }


	int main(int argc, char* argv[]) { /* ... */ }


An implementation may provide further overloaded versions of main(), but they must all return int. That int is returned to your program's caller; returning it is the "responsible" thing to do, and "returning nothing" would not be. On systems where the caller cannot make use of the return value, the value is simply ignored — but that does not make void main() legal C++ or C. Even if your compiler accepts the definition, don't get into the habit — you risk being considered ignorant by other C and C++ programmers.
In C++, main() need not contain an explicit return statement; if control reaches the end of main(), the effect is that of returning 0. For example:
	#include<iostream>

	int main()
	{
		std::cout << "This program returns the integer value 0\n";
	}


A burden? Hardly — int main() is even one character shorter than void main() :O) Also note that neither ISO C++ nor C99 lets you omit the return type; that is, unlike in C89 and ARM C++ [Translator's note: the C++ described in The Annotated C++ Reference Manual by Margaret Ellis and Bjarne Stroustrup, 1990], int is not the default. Consequently,
	#include<iostream>

	main() { /* ... */ }

is an error, because main() lacks a return type.


Q: Why can't I overload ".", "::", and "sizeof"?

A: Most operators can be overloaded; the exceptions are ".", "::", "?:", and "sizeof". There is no fundamental reason to ban overloading of ?:; it simply wasn't needed. Note also that an overloaded expr1?expr2:expr3 could not guarantee that only one of expr2 and expr3 is evaluated.

"sizeof" cannot be overloaded because built-in operations, such as pointer addition, depend on it. For example:

	X a[10];
	X* p = &a[3];
	X* q = &a[3];
	p++;	// p points to a[4]
		// thus the integer value of p must be
		// sizeof(X) larger than the integer value of q

Thus, sizeof(X) could not be given a new meaning without violating basic language rules.

In N::m, neither N nor m is an expression; they are simply names known to the compiler, and "::" performs compile-time scope resolution — no expression evaluation is involved. One might think it a good idea to be able to overload "x::y" where x is an actual object rather than a namespace or class, but that would introduce new syntax [Translator's note: the point of overloading is to give operators new semantics, not to change the grammar — which would breed confusion], and I can't see how the added complexity would buy us anything.

In principle, operator "." could be overloaded, just as "->" is. However, doing so muddies the semantics: do we mean the object after the ".", or the entity that the thing after the "." actually refers to? Look at this example (which assumes "." could be overloaded):

	class Y {
	public:
		void f();
		// ...
	};

	class X {	// assume that you can overload .
		Y* p;
		Y& operator.() { return *p; }
		void f();
		// ...
	};



	void g(X& x)
	{
		x.f();	// X::f or Y::f or error?
	}


This problem can be solved in several ways. At the time of standardization, it was not obvious which way would be best. For details, see The Design and Evolution of C++.


Q: How do I convert an integer to a string?

A: The simplest way is to use a stringstream:
	#include<iostream>
	#include<string>
	#include<sstream>
	using namespace std;

	string itos(int i)	// convert int to string
	{
		stringstream s;
		s << i;
		return s.str();
	}

	int main()
	{
		int i = 127;
		string ss = itos(i);
		const char* p = ss.c_str();

		cout << ss << " " << p << "\n";
	}


Naturally, this technique works for converting any type that you can output with "<<" into a string. For details about string streams, see The C++ Programming Language, section 21.5.3.


Q: Which is correct: "int* p;" or "int *p;"?

A: To the machine, they are exactly equivalent, and both are correct — as are "int * p" and "int*p". The compiler doesn't care how many spaces you scatter about.

To a human reader, though, the two carry different connotations, and writing style matters. C-style declarations and expressions have been regarded as something even worse than a "necessary evil" [Translator's note: a price one has to pay to reach some goal — as some regard environmental damage as a necessary evil of economic growth], whereas C++ places great emphasis on types. So there is no right or wrong between "int *p" and "int* p", only a difference of style.

A typical C programmer writes "int *p" and will tell you, quite plausibly, that "it means '*p is an int'". Here the * binds to the p — that is the C style, and it emphasizes syntax.

A typical C++ programmer writes "int* p" and will tell you that "p is a pointer to an int; p's type is int*". This style emphasizes type. Naturally, it is the style I prefer :O) And I consider type a supremely important notion, in no way less important than the "more advanced parts" of the C++ language. [Translator's note: RTTI, the various casts, the template machinery, and so on might be called the "more advanced parts", yet they too are extensions and applications of the notion of type. I have written two articles on C++ and OOP for this magazine, both stressing the importance of understanding types, and I once translated the chapter on types from Object Unencapsulated (a book revised from the author's earlier C++?? A Critique, which circulated widely online), whose author goes so far as to say that Object Oriented Programming should properly be renamed Type Oriented Programming! That is an overcorrection, but types truly are at the core of programming languages.]

The difference between int* and int * is not particularly striking when a single variable is declared, but the potential for confusion is fully exposed when several variables are declared at once:

	int* p, p1;	// probable error: p1 is not an int*


Is p1's type int, or int*? Placing the * closer to p does nothing to clarify matters:
	int *p, p1;	// probable error?


To be safe, it seems best to declare just one variable at a time — especially when the declaration includes an initializer. [Translator's note: throughout this FAQ, declare/declaration and define/definition are translated consistently. The usual understanding of the basic difference is this: a "declaration" merely informs the compiler, reserving a place in the symbol table for the declared name (a type name, variable name, function name, and so on) without supplying its concrete semantics — no memory is allocated and no actual binary code is generated. A "definition" must supply the semantics: if a declaration reserves an entry for a new word in a dictionary, a definition fills in that entry with the word's meaning and usage. When a C++ statement is a definition, the compiler will emit machine instructions or allocate memory for it, whereas a statement that is only a declaration compiles to no actual code. From that standpoint, in some places where the author writes "declaration" of an object, class, or type, "definition" would match our understanding better; this translation nevertheless stays faithful to the original rather than amending it to my reading. All parts of the translation reflecting my personal understanding or additions are given as translator's notes, for the reader's reference.] People are unlikely to write code like this:
	int* p = &i;
	int p1 = p;	// error: int initialized by int*


And if someone does, the compiler will object — it reports an error.

Whenever something can be done in more than one way, someone will get confused; and whenever a choice comes down to personal taste, the arguments never end. Stick to declaring one pointer per declaration and initializing it on the spot, and this long-standing source of confusion evaporates. For more discussion of the C declarator syntax, see The Design and Evolution of C++.


Q: Which layout style is best?

A: Ah, that is a matter of personal taste. People often attach great weight to layout style, yet consistency of style probably matters more than which style is chosen. Like most people, I would struggle to construct a "logical proof" for my own preferences :O)

I use the "K&R" style, which — counting the conventions for constructs that do not exist in C — is sometimes called the "Stroustrup" style. For example:

	class C : public B {
	public:
		// ...
	};

	void f(int* p, int max)
	{
		if (p) {
			// ...
		}
	
		for (int i = 0; i<max; ++i) {
			// ...
		}
	}


This style conserves vertical space — I like to fit as much code on the screen as I can :O) The opening brace of a function definition is placed where it is so that function definitions stand apart from class definitions: I can spot a function at a glance.

Proper indentation is very important.

Design issues — such as using abstract classes to represent significant interfaces, using templates to express flexible and extensible type-safe abstractions, and using exceptions properly to report errors — matter far more than layout style.

[Translator's note: The Practice of Programming devotes a chapter to a detailed treatment of code style.]


Q: Should const come before or after the type?

A: I put it before, but that is a matter of taste. "const T" and "T const" are both allowed, and they are equivalent. For example:
	const int a = 1;	// ok
	int const b = 2;	// also ok


I think the first form is the more idiomatic and the less likely to confuse :O)

Why is this so? When I invented "const" (it was originally named "readonly" and had a counterpart called "writeonly"), I allowed it both before and after the type, because doing so introduces no ambiguity. C and C++ compilers of the day imposed few ordering rules on modifiers.

I don't remember any deep thought or debate about the ordering at the time. Some early C++ users — notably me — simply felt that const int c = 10; looked better than int const c = 10;. I may also have been influenced by the fact that many of my early examples were written with "readonly", and readonly int c = 10; really does read better than int readonly c = 10;. The earliest C/C++ code to use "const" was produced by me doing a global find-and-replace of readonly with const. I remember discussing syntax alternatives with several people, including Dennis Ritchie, but I can't recall which languages we were discussing.

Also note that if it is the pointer itself that must not be modified, the const goes after the "*". For example:

	int *const p1 = q;	// constant pointer to int variable
	int const* p2 = q;	// pointer to constant int
	const int* p3 = q;	// pointer to constant int



Q: What's wrong with macros?

A: Macros do not obey the C++ scope and type rules, and that brings plenty of grief. Consequently, C++ provides alternatives that fit well with the rest of the language, such as inline functions, templates, and namespaces. Consider:

	#include "someheader.h"

	struct S {
		int alpha;
		int beta;
	};

If someone has (unwisely) written a macro named "alpha" or "beta", this code will not compile — or, worse, will compile into something you never anticipated. For example, if "someheader.h" contains the definitions:
	#define alpha 'a'
	#define beta b[2]

then the code's meaning is completely subverted.

Writing macro names — and only macro names — in all capitals does help to mitigate the problem, but macros enjoy no language-level protection. In the example above, alpha and beta are member names in the scope of S, but that means nothing to a macro: macro expansion happens before compilation proper, and the expander treats the source file as a mere stream of characters. This is one of the shortcomings of the C/C++ programming environment: the source file means different things to the human reader and to the preprocessor.

Unfortunately, you cannot count on other programmers to avoid what you consider "silly" mistakes. For instance, people recently told me they had come across macros containing "goto". I have seen such code, and have heard the argument that a goto in a macro is sometimes useful. For example:

	#define prefix get_ready(); int ret__
	#define Return(i) ret__=i; do_something(); goto exit
	#define suffix exit: cleanup(); return ret__

	void f()
	{
		prefix;
		// ...
		Return(10);
		// ...
		Return(x++);
		//...
		suffix;
	}


Imagine being the maintenance programmer handed this code, with the macro definitions hidden away in a header file (to make the "trick" more sporting — not an uncommon arrangement). Would you have the faintest idea what was going on?

A common and subtler problem is that function-style macros do not obey the rules of function argument passing. For example:

	#define square(x) (x*x)

	void f(double d, int i)
	{
		square(d);	// fine
		square(i++);	// ouch: means (i++*i++)
		square(d+1);	// ouch: means (d+1*d+1); that is, (d+d+1)
		// ...
	}


The "d+1" problem can be cured by adding parentheses in the macro definition:
	#define square(x) ((x)*(x))	/* better */


But the problem of "i++" being evaluated twice remains.

I know of facilities called "macros" in other languages that are not as flawed and troublesome as the macros of the C/C++ preprocessor, but I have no ambition to improve C++'s macros. Rather, I recommend the proper use of the other mechanisms of the C++ language: inline functions, templates, constructors, destructors, exception handling, and so on.


[Translator's note: the above is a complete translation of Bjarne Stroustrup's C++ Style and Technique FAQ. Bjarne is Danish, and his English prose — technical prose especially — is not easy reading. This translation doubtless contains its share of errors and distortions; corrections from readers are welcome at [email protected].]

Original:

http://www2.research.att.com/~bs/bstechfaq.htm

========================================================================================
