<section xmlns="http://docbook.org/ns/docbook" version="5.0"
xml:id="std.util.memory.allocator" xreflabel="Allocator">
<?dbhtml filename="allocator.html"?>
<info><title>Allocators</title>
<keywordset>
<keyword>
ISO C++
</keyword>
<keyword>
allocator
</keyword>
</keywordset>
</info>
<para>
Memory management for Standard Library entities is encapsulated in a
class template called <classname>allocator</classname>. The
<classname>allocator</classname> abstraction is used throughout the
library in <classname>string</classname>, container classes,
algorithms, and parts of iostreams. This class, and its base
classes, make up the set of available free store
(<quote>heap</quote>) management classes.
</para>
<section xml:id="allocator.req"><info><title>Requirements</title></info>
<para>
The C++ standard only gives a few directives in this area:
</para>
<itemizedlist>
<listitem>
<para>
When you add elements to a container, and the container must
allocate more memory to hold them, the container makes the
request via its <type>Allocator</type> template
parameter, which is usually aliased to
<type>allocator_type</type>. This includes adding chars
to the string class, which acts as a regular STL container in
this respect.
</para>
</listitem>
<listitem>
<para>
The default <type>Allocator</type> argument of every
container-of-T is <classname>allocator<T></classname>.
</para>
</listitem>
<listitem>
<para>
The interface of the <classname>allocator<T></classname> class is
extremely simple. It has about 20 public declarations (nested
typedefs, member functions, etc.), but the two which concern us most
are:
</para>
<programlisting>
T* allocate (size_type n, const void* hint = 0);
void deallocate (T* p, size_type n);
</programlisting>
<para>
  The <varname>n</varname> argument in both of those
  functions is a <emphasis>count</emphasis> of the number of
  <type>T</type>'s to allocate space for, <emphasis>not their
  total size</emphasis>.
  (This is a simplification; the real signatures use nested typedefs.)
  A minimal usage sketch appears after this list.
</para>
</listitem>
<listitem>
<para>
The storage is obtained by calling <function>::operator
new</function>, but it is unspecified when or how
often this function is called. The use of the
<varname>hint</varname> is unspecified, but intended as an
aid to locality if an implementation so
desires. <constant>[20.4.1.1]/6</constant>
</para>
</listitem>
</itemizedlist>
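<para>
  As an illustration only, here is a minimal sketch that calls these
  two member functions directly through
  <classname>std::allocator</classname>; in ordinary code a container
  makes these calls for you.
</para>
<programlisting>
#include <memory>

int main()
{
  std::allocator<int> a;
  int* p = a.allocate(16);   // space for 16 ints, not 16 bytes
  // ... construct and use the elements here ...
  a.deallocate(p, 16);       // pass back the same element count
  return 0;
}
</programlisting>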
<para>
Complete details can be found in the C++ standard; look in
<constant>[20.4 Memory]</constant>.
</para>
</section>
<section xml:id="allocator.design_issues"><info><title>Design Issues</title></info>
<para>
The easiest way of fulfilling the requirements is to call
<function>operator new</function> each time a container needs
memory, and to call <function>operator delete</function> each time
the container releases memory. This method may be <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/ml/libstdc++/2001-05/msg00105.html">slower</link>
than caching the allocations and re-using previously-allocated
memory, but has the advantage of working correctly across a wide
variety of hardware and operating systems, including large
clusters. The <classname>__gnu_cxx::new_allocator</classname>
implements the simple operator new and operator delete semantics,
while <classname>__gnu_cxx::malloc_allocator</classname>
implements much the same thing, only with the C library functions
<function>malloc</function> and <function>free</function>.
</para>
<para>
Another approach is to use intelligence within the allocator
class to cache allocations. This extra machinery can take a variety
of forms: a bitmap index, an index into exponentially-increasing
power-of-two-sized buckets, or a simpler fixed-size pooling cache.
The cache is shared among all the containers in the program: when
your program's <classname>std::vector<int></classname> gets
cut in half and frees a bunch of its storage, that memory can be
reused by the private
<classname>std::list<WonkyWidget></classname> brought in from
a KDE library that you linked against. And operators
<function>new</function> and <function>delete</function> are not
always called to pass the memory on, either, which is a speed
bonus. Examples of allocators that use these techniques are
<classname>__gnu_cxx::bitmap_allocator</classname>,
<classname>__gnu_cxx::pool_allocator</classname>, and
<classname>__gnu_cxx::__mt_alloc</classname>.
</para>
<para>
Depending on the implementation techniques used, the underlying
operating system, and compilation environment, scaling caching
allocators can be tricky. In particular, order-of-destruction and
order-of-creation for memory pools may be difficult to pin down
with certainty, which may create problems when used with plugins
or loading and unloading shared objects in memory. As such, using
caching allocators on systems that do not support
<function>abi::__cxa_atexit</function> is not recommended.
</para>
</section>
<section xml:id="allocator.impl"><info><title>Implementation</title></info>
<section><info><title>Interface Design</title></info>
<para>
The only allocator interface that
is supported is the standard C++ interface. As such, all STL
containers have been adjusted, and all external allocators have
been modified to support this change.
</para>
<para>
The class <classname>allocator</classname> just has typedef,
constructor, and rebind members. It inherits from one of the
high-speed extension allocators, covered below. Thus, all
allocation and deallocation depends on the base class.
</para>
<para>
The base class that <classname>allocator</classname> is derived from
may not be user-configurable.
</para>
</section>
<section><info><title>Selecting Default Allocation Policy</title></info>
<para>
It's difficult to pick an allocation strategy that will provide
maximum utility, without excessively penalizing some behavior. In
fact, it's difficult just deciding which typical actions to measure
for speed.
</para>
<para>
Three synthetic benchmarks have been created that provide data
that is used to compare different C++ allocators. These tests are:
</para>
<orderedlist>
<listitem>
<para>
Insertion.
</para>
<para>
Over multiple iterations, elements are inserted into various
STL containers up to a maximum count. A variety of allocators
are tested.
Test source for <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/testsuite/performance/23_containers/insert/sequence.cc?view=markup">sequence</link>
and <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/testsuite/performance/23_containers/insert/associative.cc?view=markup">associative</link>
containers.
</para>
</listitem>
<listitem>
<para>
Insertion and erasure in a multi-threaded environment.
</para>
<para>
This test shows the ability of the allocator to reclaim memory
on a per-thread basis, and also measures thread contention
for memory resources.
Test source
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/testsuite/performance/23_containers/insert_erase/associative.cc?view=markup">here</link>.
</para>
</listitem>
<listitem>
<para>
A threaded producer/consumer model.
</para>
<para>
Test source for
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/viewcvs/trunk/libstdc++-v3/testsuite/performance/23_containers/producer_consumer/sequence.cc?view=markup">sequence</link>
and
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://gcc.gnu.org/viewcvs/trunk/libstdc++-v3/testsuite/performance/23_containers/producer_consumer/associative.cc?view=markup">associative</link>
containers.
</para>
</listitem>
</orderedlist>
<para>
The current default choice for
<classname>allocator</classname> is
<classname>__gnu_cxx::new_allocator</classname>.
</para>
</section>
<section><info><title>Disabling Memory Caching</title></info>
<para>
In use, <classname>allocator</classname> may allocate and
deallocate using implementation-specified strategies and
heuristics. Because of this, a given call to an allocator object's
<function>allocate</function> member function may not actually
call the global <function>operator new</function>; the same holds
for calls to the <function>deallocate</function> member
function.
</para>
<para>
This can be confusing.
</para>
<para>
In particular, this can make debugging memory errors more
difficult, especially when using third party tools like valgrind or
debug versions of <function>new</function>.
</para>
<para>
There are various ways to solve this problem. One would be to use
a custom allocator that just called operators
<function>new</function> and <function>delete</function>
directly, for every allocation. (See
<filename>include/ext/new_allocator.h</filename>, for instance.)
However, that option would involve changing source code to use
a non-default allocator. Another option is to force the
default allocator to remove caching and pools, and to directly
allocate with every call of <function>allocate</function> and
directly deallocate with every call of
<function>deallocate</function>, regardless of efficiency. As it
turns out, this last option is also available.
</para>
<para>
To globally disable memory caching within the library for the
default allocator, merely set
<constant>GLIBCXX_FORCE_NEW</constant> (with any value) in the
system's environment before running the program. If your program
crashes with <constant>GLIBCXX_FORCE_NEW</constant> in the
environment, it likely means that you linked against objects
built against the older library (objects which might still be using
the cached allocations...).
</para>
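<para>
  For example, from a POSIX shell (the program name here is only
  illustrative):
</para>
<programlisting>
GLIBCXX_FORCE_NEW=1 ./my_program
</programlisting>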
</section>
</section>
<section xml:id="allocator.using"><info><title>Using a Specific Allocator</title></info>
<para>
You can specify different memory management schemes on a
per-container basis, by overriding the default
<type>Allocator</type> template parameter. For example, an easy
(but non-portable) method of specifying that only <function>malloc</function> and <function>free</function>
should be used instead of the default node allocator is:
</para>
<programlisting>
std::list <int, __gnu_cxx::malloc_allocator<int> > malloc_list;</programlisting>
<para>
Likewise, a debugging form of whichever allocator is currently in use:
</para>
<programlisting>
std::deque <int, __gnu_cxx::debug_allocator<std::allocator<int> > > debug_deque;
</programlisting>
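<para>
  The allocator parameter of <classname>std::basic_string</classname>
  works the same way; as a sketch, a string type whose storage comes
  from <function>malloc</function> could be written as:
</para>
<programlisting>
typedef std::basic_string<char, std::char_traits<char>,
                          __gnu_cxx::malloc_allocator<char> > malloc_string;
</programlisting>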
</section>
<section xml:id="allocator.custom"><info><title>Custom Allocators</title></info>
<para>
  Writing a portable C++ allocator means providing an interface that
  looks much like the one specified for
  <classname>allocator</classname>. Additional member functions are
  permissible, but nothing may be removed.
</para>
<para>
Probably the best place to start would be to copy one of the
extension allocators: say a simple one like
<classname>new_allocator</classname>.
</para>
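<para>
  The following is a skeletal sketch in that spirit (the name
  <classname>my_allocator</classname> is purely illustrative, and the
  interface shown is the classic pre-C++11 one): every request is
  forwarded straight to <function>::operator new</function> and
  <function>::operator delete</function>.
</para>
<programlisting>
#include <cstddef>
#include <new>

template<typename T>
  struct my_allocator
  {
    typedef T               value_type;
    typedef T*              pointer;
    typedef const T*        const_pointer;
    typedef T&              reference;
    typedef const T&        const_reference;
    typedef std::size_t     size_type;
    typedef std::ptrdiff_t  difference_type;

    template<typename U> struct rebind { typedef my_allocator<U> other; };

    my_allocator() throw() { }
    my_allocator(const my_allocator&) throw() { }
    template<typename U> my_allocator(const my_allocator<U>&) throw() { }

    // Forward directly to the global operators, as new_allocator does.
    pointer allocate(size_type n, const void* = 0)
    { return static_cast<pointer>(::operator new(n * sizeof(T))); }

    void deallocate(pointer p, size_type)
    { ::operator delete(p); }

    size_type max_size() const throw()
    { return size_type(-1) / sizeof(T); }

    void construct(pointer p, const T& val) { ::new((void*)p) T(val); }
    void destroy(pointer p) { p->~T(); }
  };

// All instances are interchangeable, so they always compare equal.
template<typename T, typename U>
  bool operator==(const my_allocator<T>&, const my_allocator<U>&)
  { return true; }

template<typename T, typename U>
  bool operator!=(const my_allocator<T>&, const my_allocator<U>&)
  { return false; }
</programlisting>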
</section>
<section xml:id="allocator.ext"><info><title>Extension Allocators</title></info>
<para>
Several other allocators are provided as part of this
implementation. The location of the extension allocators and their
names have changed, but in all cases the functionality is
equivalent. Starting with gcc-3.4, all extension allocators are
standard style; before that point, SGI style was the norm. Because of
this, the number of template arguments also changed.
</para>
<para>
More details on each of these extension allocators follow; a short
usage sketch appears after the list.
</para>
<orderedlist>
<listitem>
<para>
<classname>new_allocator</classname>
</para>
<para>
Simply wraps <function>::operator new</function>
and <function>::operator delete</function>.
</para>
</listitem>
<listitem>
<para>
<classname>malloc_allocator</classname>
</para>
<para>
Simply wraps <function>malloc</function> and
<function>free</function>. There is also a hook for an
out-of-memory handler (for
<function>new</function>/<function>delete</function> this is
taken care of elsewhere).
</para>
</listitem>
<listitem>
<para>
<classname>array_allocator</classname>
</para>
<para>
Allows allocations of known and fixed sizes using existing
global or external storage allocated via construction of
<classname>std::tr1::array</classname> objects. By using this
allocator, fixed size containers (including
<classname>std::string</classname>) can be used without
instances calling <function>::operator new</function> and
<function>::operator delete</function>. This capability
allows the use of STL abstractions without runtime
complications or overhead, even in situations such as program
startup. For usage examples, please consult the testsuite.
</para>
</listitem>
<listitem>
<para>
<classname>debug_allocator</classname>
</para>
<para>
A wrapper around an arbitrary allocator A. It passes on
slightly increased size requests to A, and uses the extra
memory to store size information. When a pointer is passed
to <function>deallocate()</function>, the stored size is
checked against the size argument, and <function>assert()</function>
is used to guarantee they match.
</para>
</listitem>
<listitem>
<para>
<classname>throw_allocator</classname>
</para>
<para>
Includes memory tracking and marking abilities as well as hooks for
throwing exceptions at configurable intervals (including random,
all, none).
</para>
</listitem>
<listitem>
<para>
<classname>__pool_alloc</classname>
</para>
<para>
A high-performance, single pool allocator. The reusable
memory is shared among identical instantiations of this type.
It calls through <function>::operator new</function> to
obtain new memory when its lists run out. If a client
container requests a block larger than a certain threshold
size, then the pool is bypassed, and the allocate/deallocate
request is passed to <function>::operator new</function>
directly.
</para>
<para>
Older versions of this class take a boolean template
parameter, called <varname>thr</varname>, and an integer template
parameter, called <varname>inst</varname>.
</para>
<para>
The <varname>inst</varname> number is used to track additional memory
pools. The point of the number is to allow multiple
instantiations of the classes without changing the semantics at
all. All three of
</para>
<programlisting>
typedef __pool_alloc<true,0> normal;
typedef __pool_alloc<true,1> private_pool;
typedef __pool_alloc<true,42> also_private;
</programlisting>
<para>
behave exactly the same way. However, the memory pool for each type
(and remember that different instantiations result in different types)
remains separate.
</para>
<para>
The library uses <emphasis>0</emphasis> in all its instantiations. If you
wish to keep separate free lists for a particular purpose, use a
different number.
</para>
<para>The <varname>thr</varname> boolean determines whether the
pool should be manipulated atomically or not. When
<varname>thr</varname> = <constant>true</constant>, the allocator
is thread-safe; when <varname>thr</varname> =
<constant>false</constant>, it is slightly faster but unsafe for
multiple threads.
</para>
<para>
For thread-enabled configurations, the pool is locked with a
single big lock. In some situations, this implementation detail
may result in severe performance degradation.
</para>
<para>
(Note that the GCC thread abstraction layer allows us to provide
safe zero-overhead stubs for the threading routines, if threads
were disabled at configuration time.)
</para>
</listitem>
<listitem>
<para>
<classname>__mt_alloc</classname>
</para>
<para>
A high-performance fixed-size allocator with
exponentially-increasing allocations. It has its own
documentation, found <link linkend="manual.ext.allocator.mt">here</link>.
</para>
</listitem>
<listitem>
<para>
<classname>bitmap_allocator</classname>
</para>
<para>
A high-performance allocator that uses a bit-map to keep track
of the used and unused memory locations. It has its own
documentation, found <link linkend="manual.ext.allocator.bitmap">here</link>.
</para>
</listitem>
</orderedlist>
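<para>
  As a usage sketch (assuming the <filename>ext/</filename> headers
  shipped with this implementation), the standard-style extension
  allocators drop in as a container's <type>Allocator</type>
  parameter; only the header and the class name change:
</para>
<programlisting>
#include <vector>
#include <list>
#include <ext/pool_allocator.h>
#include <ext/bitmap_allocator.h>

std::vector<int, __gnu_cxx::__pool_alloc<int> >    pooled_vector;
std::list<int, __gnu_cxx::bitmap_allocator<int> >  bitmapped_list;
</programlisting>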
</section>
<bibliography xml:id="allocator.biblio"><info><title>Bibliography</title></info>
<biblioentry>
<citetitle>
ISO/IEC 14882:1998 Programming languages - C++
</citetitle>
<abbrev>
isoc++_1998
</abbrev>
<pagenums>20.4 Memory</pagenums>
</biblioentry>
<biblioentry>
<biblioid xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.drdobbs.com/cpp/184403759" class="uri">
</biblioid>
<citetitle>
The Standard Librarian: What Are Allocators Good For?
</citetitle>
<author><personname><firstname>Matt</firstname><surname>Austern</surname></personname></author>
<publisher>
<publishername>
C/C++ Users Journal
</publishername>
</publisher>
</biblioentry>
<biblioentry>
<biblioid xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.cs.umass.edu/~emery/hoard/" class="uri">
</biblioid>
<citetitle>
The Hoard Memory Allocator
</citetitle>
<author><personname><firstname>Emery</firstname><surname>Berger</surname></personname></author>
</biblioentry>
<biblioentry>
<biblioid xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf" class="uri">
</biblioid>
<citetitle>
Reconsidering Custom Memory Allocation
</citetitle>
<author><personname><firstname>Emery</firstname><surname>Berger</surname></personname></author>
<author><personname><firstname>Ben</firstname><surname>Zorn</surname></personname></author>
<author><personname><firstname>Kathryn</firstname><surname>McKinley</surname></personname></author>
<copyright>
<year>2002</year>
<holder>OOPSLA</holder>
</copyright>
</biblioentry>
<biblioentry>
<biblioid xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.angelikalanger.com/Articles/C++Report/Allocators/Allocators.html" class="uri">
</biblioid>
<citetitle>
Allocator Types
</citetitle>
<author><personname><firstname>Klaus</firstname><surname>Kreft</surname></personname></author>
<author><personname><firstname>Angelika</firstname><surname>Langer</surname></personname></author>
<publisher>
<publishername>
C/C++ Users Journal
</publishername>
</publisher>
</biblioentry>
<biblioentry>
<citetitle>The C++ Programming Language</citetitle>
<author><personname><firstname>Bjarne</firstname><surname>Stroustrup</surname></personname></author>
<copyright>
<year>2000</year>
<holder/>
</copyright>
<pagenums>19.4 Allocators</pagenums>
<publisher>
<publishername>
Addison Wesley
</publishername>
</publisher>
</biblioentry>
<biblioentry>
<citetitle>Yalloc: A Recycling C++ Allocator</citetitle>
<author><personname><firstname>Felix</firstname><surname>Yen</surname></personname></author>
</biblioentry>
</bibliography>
</section>