<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Performance | Ziji's Homepage</title><link>https://zijishi.xyz/tag/performance/</link><atom:link href="https://zijishi.xyz/tag/performance/index.xml" rel="self" type="application/rss+xml"/><description>Performance</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><copyright>Ziji Shi © 2025</copyright><lastBuildDate>Wed, 19 Feb 2025 23:20:07 +0800</lastBuildDate><image><url>https://zijishi.xyz/media/icon_hu_926934747de47144.png</url><title>Performance</title><link>https://zijishi.xyz/tag/performance/</link></image><item><title>Understanding Performance of C++ from the Implementation Perspective (2) : STL Containers</title><link>https://zijishi.xyz/post/cpp/an-incomplete-intro-to-stl-from-implementation-perspective/</link><pubDate>Wed, 19 Feb 2025 23:20:07 +0800</pubDate><guid>https://zijishi.xyz/post/cpp/an-incomplete-intro-to-stl-from-implementation-perspective/</guid><description>&lt;p&gt;One thing about C++ that attracts me is the flexibility to control the program. Therefore, I am writing a series of blogs based on my understanding of highly efficient C++ code. They come from various sources, including CppCon videos, StackOverflow, and the C++ standard. I have also benchmarked some of the claims. I hope this will be helpful to you.&lt;/p&gt;
&lt;h2 id="rule-of-thumb"&gt;Rule of Thumb&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;If &lt;code&gt;unordered&lt;/code&gt; exists in an STL container name, it is almost certainly implemented via a hash table.&lt;/li&gt;
&lt;li&gt;When in doubt, use &lt;code&gt;vector&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2 id="associative-containers-an-overview"&gt;Associative Containers: An Overview&lt;/h2&gt;
&lt;p&gt;STL provides two families of associative containers:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Container&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;th&gt;Ordered?&lt;/th&gt;
&lt;th&gt;Average Lookup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;set&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Red-black tree&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;map&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Red-black tree&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unordered_set&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Hash table&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;unordered_map&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Hash table&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The core trade-off is &lt;strong&gt;order vs. speed&lt;/strong&gt;. Tree-based containers maintain sorted order and guarantee O(log n) worst-case; hash-based containers give O(1) average but have no ordering and worst-case O(n) on hash collisions.&lt;/p&gt;
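&lt;p&gt;A quick way to see the ordering difference (a minimal sketch; the &lt;code&gt;unordered_set&lt;/code&gt; iteration order is implementation-defined, so only the &lt;code&gt;set&lt;/code&gt; output is predictable):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;iostream&amp;gt;
#include &amp;lt;set&amp;gt;
#include &amp;lt;unordered_set&amp;gt;
using namespace std;

int main() {
    set&amp;lt;int&amp;gt; s = {3, 1, 2};
    unordered_set&amp;lt;int&amp;gt; u = {3, 1, 2};
    for (int x : s) cout &amp;lt;&amp;lt; x &amp;lt;&amp;lt; ' ';   // always prints 1 2 3 (sorted)
    cout &amp;lt;&amp;lt; '\n';
    for (int x : u) cout &amp;lt;&amp;lt; x &amp;lt;&amp;lt; ' ';   // some implementation-defined order
    cout &amp;lt;&amp;lt; '\n';
}
&lt;/code&gt;&lt;/pre&gt;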
&lt;hr&gt;
&lt;h2 id="unordered_set-and-unordered_map"&gt;&lt;code&gt;unordered_set&lt;/code&gt; and &lt;code&gt;unordered_map&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Both are implemented as a &lt;strong&gt;hash table&lt;/strong&gt;. In practice, every major standard library (libstdc++, libc++, MSVC) uses separate chaining with a bucket array: the standard&amp;rsquo;s bucket interface and its guarantee that references to elements survive rehashing effectively rule out open addressing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Average&lt;/th&gt;
&lt;th&gt;Worst case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Insert&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lookup&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The worst case occurs when all keys hash to the same bucket (degenerate collision). In practice this is rare with a good hash function and reasonable load factor.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You need fast existence checks or key-value lookup and don&amp;rsquo;t care about order.&lt;/li&gt;
&lt;li&gt;Keys are primitive types or types for which a hash is readily available.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="defining-a-custom-hash"&gt;Defining a Custom Hash&lt;/h3&gt;
&lt;p&gt;For primitive types (&lt;code&gt;int&lt;/code&gt;, &lt;code&gt;size_t&lt;/code&gt;, etc.) the standard library provides hash specializations automatically. For custom types such as &lt;code&gt;pair&amp;lt;int,int&amp;gt;&lt;/code&gt; or a struct, you must supply your own.&lt;/p&gt;
&lt;p&gt;A simple and effective polynomial hash for 2D or 3D indices:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;functional&amp;gt;
#include &amp;lt;unordered_map&amp;gt;
#include &amp;lt;unordered_set&amp;gt;
#include &amp;lt;utility&amp;gt;
using namespace std;

struct PairHash {
    size_t operator()(const pair&amp;lt;int,int&amp;gt;&amp;amp; p) const {
        size_t h = 17;
        h = h * 53 + hash&amp;lt;int&amp;gt;{}(p.first);
        h = h * 53 + hash&amp;lt;int&amp;gt;{}(p.second);
        return h;
    }
};

unordered_set&amp;lt;pair&amp;lt;int,int&amp;gt;, PairHash&amp;gt; visited;
unordered_map&amp;lt;pair&amp;lt;int,int&amp;gt;, int, PairHash&amp;gt; dist;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For a 3D key, extend the pattern:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;functional&amp;gt;
#include &amp;lt;tuple&amp;gt;
using namespace std;

struct TripleHash {
    size_t operator()(const tuple&amp;lt;int,int,int&amp;gt;&amp;amp; t) const {
        size_t h = 17;
        h = h * 53 + hash&amp;lt;int&amp;gt;{}(get&amp;lt;0&amp;gt;(t));
        h = h * 53 + hash&amp;lt;int&amp;gt;{}(get&amp;lt;1&amp;gt;(t));
        h = h * 53 + hash&amp;lt;int&amp;gt;{}(get&amp;lt;2&amp;gt;(t));
        return h;
    }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The constant 53 (a prime) spreads bits well and reduces collision clustering.&lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;h3 id="an-important-caveat-rehashing"&gt;An Important Caveat: Rehashing&lt;/h3&gt;
&lt;p&gt;When the load factor (number of elements / number of buckets) exceeds a threshold (default 1.0 in most implementations), the hash table &lt;strong&gt;rehashes&lt;/strong&gt;: it allocates a larger bucket array and reinserts every element. This is O(n) and can be surprising if it happens inside a hot loop.&lt;/p&gt;
&lt;p&gt;You can pre-allocate to avoid rehashing:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;unordered_map&amp;lt;int, int&amp;gt; freq;
freq.max_load_factor(0.25); // keep the table sparse for faster lookup
freq.reserve(1024);         // sized for 1024 elements, so no rehash occurs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note the order: &lt;code&gt;reserve(n)&lt;/code&gt; sizes the bucket array for &lt;code&gt;n&lt;/code&gt; elements at the &lt;em&gt;current&lt;/em&gt; max load factor, so lower the load factor first.&lt;/p&gt;
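&lt;p&gt;To see rehashing happen, one can watch &lt;code&gt;bucket_count()&lt;/code&gt; change during insertion (a minimal sketch; the exact growth sequence is implementation-defined):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;iostream&amp;gt;
#include &amp;lt;unordered_map&amp;gt;
using namespace std;

int main() {
    unordered_map&amp;lt;int, int&amp;gt; m;
    size_t buckets = m.bucket_count();
    for (int i = 0; i &amp;lt; 1000; ++i) {
        m[i] = i;
        if (m.bucket_count() != buckets) { // a rehash just occurred
            cout &amp;lt;&amp;lt; &amp;quot;rehash at size &amp;quot; &amp;lt;&amp;lt; m.size() &amp;lt;&amp;lt; &amp;quot;: &amp;quot;
                 &amp;lt;&amp;lt; buckets &amp;lt;&amp;lt; &amp;quot; -&amp;gt; &amp;quot; &amp;lt;&amp;lt; m.bucket_count() &amp;lt;&amp;lt; &amp;quot; buckets\n&amp;quot;;
            buckets = m.bucket_count();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;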
&lt;hr&gt;
&lt;h2 id="set-and-map"&gt;&lt;code&gt;set&lt;/code&gt; and &lt;code&gt;map&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Both are implemented as a &lt;strong&gt;red-black tree&lt;/strong&gt; — a self-balancing binary search tree that guarantees O(log n) for all operations in the worst case, and maintains elements in sorted order.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Worst case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Insert&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lookup&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Min/Max (via &lt;code&gt;begin()&lt;/code&gt;/&lt;code&gt;rbegin()&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;In-order traversal&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;When to use:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You need elements in sorted order (e.g., iterating from smallest to largest).&lt;/li&gt;
&lt;li&gt;You need range queries: &lt;code&gt;lower_bound&lt;/code&gt;, &lt;code&gt;upper_bound&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You need stable worst-case guarantees (no hash collision spikes).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="range-queries"&gt;Range Queries&lt;/h3&gt;
&lt;p&gt;The tree structure enables efficient range operations not possible with hash containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;iostream&amp;gt;
#include &amp;lt;map&amp;gt;
#include &amp;lt;string&amp;gt;
using namespace std;

map&amp;lt;int, string&amp;gt; m;
// ... populate ...
// Print all entries with keys in [lo, hi]
auto it = m.lower_bound(lo);    // first element with key &amp;gt;= lo
while (it != m.end() &amp;amp;&amp;amp; it-&amp;gt;first &amp;lt;= hi) {
    cout &amp;lt;&amp;lt; it-&amp;gt;first &amp;lt;&amp;lt; &amp;quot; -&amp;gt; &amp;quot; &amp;lt;&amp;lt; it-&amp;gt;second &amp;lt;&amp;lt; &amp;quot;\n&amp;quot;;
    ++it;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="performance-pitfall-cache-misses"&gt;Performance Pitfall: Cache Misses&lt;/h3&gt;
&lt;p&gt;Red-black trees store nodes on the heap with pointer chasing. For large sets, this leads to many &lt;strong&gt;cache misses&lt;/strong&gt; compared to a contiguous structure like &lt;code&gt;vector&lt;/code&gt; or a hash table with a flat bucket array. For small n (&amp;lt; a few hundred elements), a sorted &lt;code&gt;vector&lt;/code&gt; with binary search often outperforms &lt;code&gt;set&lt;/code&gt; due to cache locality.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;vector&amp;gt;
using namespace std;

// For small, mostly-read sets, this can be faster:
vector&amp;lt;int&amp;gt; v = {3, 1, 4, 1, 5, 9};
sort(v.begin(), v.end());
v.erase(unique(v.begin(), v.end()), v.end()); // drop duplicates
bool found = binary_search(v.begin(), v.end(), 4); // O(log n), cache-friendly
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2 id="vector-the-default-choice"&gt;&lt;code&gt;vector&lt;/code&gt;: The Default Choice&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;vector&lt;/code&gt; is a contiguous dynamic array. It is almost always the right choice unless you have a specific reason to use another container, because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cache-friendly&lt;/strong&gt;: elements are laid out sequentially in memory.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Random access in O(1)&lt;/strong&gt;: direct indexing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Append in amortized O(1)&lt;/strong&gt;: &lt;code&gt;push_back&lt;/code&gt; grows capacity geometrically on reallocation (2x in libstdc++ and libc++, 1.5x in MSVC).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Complexity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Random access&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;push_back&lt;/td&gt;
&lt;td&gt;O(1) amortized&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Insert/delete at middle&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Search (unsorted)&lt;/td&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Search (sorted, binary)&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="avoiding-reallocations"&gt;Avoiding Reallocations&lt;/h3&gt;
&lt;p&gt;Like &lt;code&gt;unordered_map&lt;/code&gt;, &lt;code&gt;vector&lt;/code&gt; resizes when it runs out of capacity. Pre-allocate when you know the approximate size:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;vector&amp;lt;int&amp;gt; v;
v.reserve(10000); // allocate once, avoid repeated reallocations
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;size()&lt;/code&gt; vs &lt;code&gt;capacity()&lt;/code&gt;: &lt;code&gt;size&lt;/code&gt; is the number of elements currently stored; &lt;code&gt;capacity&lt;/code&gt; is how much memory is allocated. After &lt;code&gt;reserve(n)&lt;/code&gt;, &lt;code&gt;size&lt;/code&gt; is unchanged but &lt;code&gt;capacity &amp;gt;= n&lt;/code&gt;.&lt;/p&gt;
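&lt;p&gt;A small sketch illustrating the distinction:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;vector&amp;gt;
using namespace std;

int main() {
    vector&amp;lt;int&amp;gt; v;
    v.reserve(100);
    assert(v.size() == 0);       // still no elements
    assert(v.capacity() &amp;gt;= 100); // but memory is already allocated
    v.push_back(42);
    assert(v.size() == 1);       // size tracks elements, not memory
}
&lt;/code&gt;&lt;/pre&gt;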
&lt;hr&gt;
&lt;h2 id="summary-which-container-to-pick"&gt;Summary: Which Container to Pick?&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;Need key-value pairs?
├─ Yes → Need sorted keys or range queries?
│        ├─ Yes → map
│        └─ No  → unordered_map (faster)
└─ No  → Need uniqueness only?
         ├─ Yes → Need sorted order?
         │        ├─ Yes → set
         │        └─ No  → unordered_set (faster)
         └─ No  → vector (default)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When performance matters, always &lt;strong&gt;measure&lt;/strong&gt;. The theoretical complexity advantage of O(1) hash lookup over O(log n) tree lookup may be dwarfed by cache effects, rehashing, or a bad hash function in practice.&lt;/p&gt;
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;See &lt;a href="https://stackoverflow.com/questions/2634690/good-hash-function-for-a-2d-index/2634715#2634715" target="_blank" rel="noopener"&gt;this StackOverflow discussion&lt;/a&gt; for more on polynomial hashing for multi-dimensional indices.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Understanding C++ Performance (1) : Stack and Heap</title><link>https://zijishi.xyz/post/cpp/understanding-performance-cpp-stack-heap/</link><pubDate>Wed, 20 Feb 2019 00:49:28 +0800</pubDate><guid>https://zijishi.xyz/post/cpp/understanding-performance-cpp-stack-heap/</guid><description>&lt;p&gt;One thing about C++ that attracts me a lot is that it gives me control over the program to a great extent. Therefore, I am starting an article series on why C++ is efficient from the implementation perspective. I have also benchmarked some of the claims and will post them when they are ready.&lt;/p&gt;
&lt;p&gt;I hope this article will be helpful to you.&lt;/p&gt;
&lt;h2 id="stack"&gt;Stack&lt;/h2&gt;
&lt;p&gt;The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for &lt;em&gt;local variables and some bookkeeping data&lt;/em&gt;. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer, which makes it fast.&lt;/p&gt;
&lt;h2 id="heap"&gt;Heap&lt;/h2&gt;
&lt;p&gt;The heap is memory set aside for &lt;em&gt;dynamic allocation&lt;/em&gt;, for instance when you call &lt;code&gt;new&lt;/code&gt; or &lt;code&gt;malloc&lt;/code&gt;. Unlike the stack, there&amp;rsquo;s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.&lt;/p&gt;
&lt;p&gt;Furthermore, unless the &lt;a href="https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization" target="_blank" rel="noopener"&gt;RAII&lt;/a&gt; idiom is adopted, heap memory must be freed manually after use, which adds overhead. Deallocation is not free either: &lt;code&gt;free&lt;/code&gt; (which &lt;code&gt;operator delete&lt;/code&gt; typically calls under the hood) runs nontrivial bookkeeping code, and it may occasionally return memory to the OS with a system call.&lt;/p&gt;
&lt;p&gt;Each thread gets a stack, while there&amp;rsquo;s typically only one heap for the application (although it isn&amp;rsquo;t uncommon to have multiple heaps for different types of allocation).&lt;/p&gt;
&lt;h2 id="thread-vs-process"&gt;Thread vs Process&lt;/h2&gt;
&lt;figure id="figure-heap-is-shared-across-threads-of-same-process-while-stack-is-private-to-each-thread"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://zijishi.xyz/img/thread-heap-stack.png" alt="Heap is shared across threads of same process, while stack is private to each thread" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Heap is shared across threads of same process, while stack is private to each thread
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;The picture above makes it clear: the heap is shared among threads of the same process, whereas each thread gets its own stack.&lt;/p&gt;
&lt;p&gt;The operating system (OS) manages the memory of different processes so that they cannot interfere with each other. The size of the heap is set when the application starts, but it can grow as more space is needed (the allocator requests more memory from the OS).&lt;/p&gt;
&lt;p&gt;As for threads, the OS allocates a stack for each new thread. The stack size is fixed when the thread is created; the default is determined by the compiler, the runtime, and OS settings (e.g., 8 MB on typical Linux systems, adjustable via &lt;code&gt;ulimit&lt;/code&gt; or thread attributes).&lt;/p&gt;
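&lt;p&gt;On POSIX systems, the default stack size for new threads can be queried through &lt;code&gt;pthread&lt;/code&gt; attributes (a minimal sketch; the reported value varies by platform):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;cstdio&amp;gt;
#include &amp;lt;pthread.h&amp;gt;

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&amp;amp;attr);  // filled with the default attributes
    size_t stack_size = 0;
    pthread_attr_getstacksize(&amp;amp;attr, &amp;amp;stack_size);
    printf(&amp;quot;default thread stack size: %zu bytes\n&amp;quot;, stack_size);
    pthread_attr_destroy(&amp;amp;attr);
}
&lt;/code&gt;&lt;/pre&gt;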
&lt;p&gt;The stack is reclaimed when its thread exits; the heap is reclaimed when the process exits. Even if memory is leaked, the OS recovers it once the program finishes.&lt;/p&gt;
&lt;h2 id="what-makes-one-faster"&gt;What makes one faster?&lt;/h2&gt;
&lt;p&gt;Stack is faster for the following reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;access pattern: it is trivial to allocate and deallocate memory (a pointer is simply incremented or decremented), while the heap involves much more complex bookkeeping for each allocation and deallocation.&lt;/li&gt;
&lt;li&gt;caching effect: each byte on the stack tends to be reused very frequently, so it tends to stay in the processor&amp;rsquo;s cache, making access very fast.&lt;/li&gt;
&lt;li&gt;synchronization takes time: the heap, being mostly a global resource, typically has to be thread-safe, i.e. each allocation and deallocation typically must be synchronized with all other heap accesses in the program.&lt;/li&gt;
&lt;li&gt;failure handling: &lt;code&gt;new&lt;/code&gt; may throw &lt;code&gt;std::bad_alloc&lt;/code&gt; on failure (&lt;code&gt;malloc&lt;/code&gt; instead returns &lt;code&gt;NULL&lt;/code&gt;), and the machinery for reporting allocation failure adds cost.&lt;/li&gt;
&lt;/ul&gt;
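&lt;p&gt;A minimal micro-benchmark sketch of the allocation cost difference (an illustration, not a rigorous benchmark: compile without aggressive optimization, e.g. &lt;code&gt;-O0&lt;/code&gt; or &lt;code&gt;-O1&lt;/code&gt;, or the compiler may remove the stack loop entirely; timings vary by platform and allocator):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;chrono&amp;gt;
#include &amp;lt;iostream&amp;gt;

int main() {
    using clk = std::chrono::steady_clock;
    constexpr int N = 1000000;
    volatile long long sink = 0; // keeps the loops from being optimized away

    auto t0 = clk::now();
    for (int i = 0; i &amp;lt; N; ++i) {
        int buf[16];             // stack: reserving space is a pointer adjustment
        buf[0] = i;
        sink = sink + buf[0];
    }
    auto t1 = clk::now();
    for (int i = 0; i &amp;lt; N; ++i) {
        int *buf = new int[16];  // heap: allocator bookkeeping on every iteration
        buf[0] = i;
        sink = sink + buf[0];
        delete[] buf;
    }
    auto t2 = clk::now();
    std::cout &amp;lt;&amp;lt; &amp;quot;stack: &amp;quot;
              &amp;lt;&amp;lt; std::chrono::duration&amp;lt;double, std::milli&amp;gt;(t1 - t0).count() &amp;lt;&amp;lt; &amp;quot; ms\n&amp;quot;
              &amp;lt;&amp;lt; &amp;quot;heap:  &amp;quot;
              &amp;lt;&amp;lt; std::chrono::duration&amp;lt;double, std::milli&amp;gt;(t2 - t1).count() &amp;lt;&amp;lt; &amp;quot; ms\n&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;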
&lt;h2 id="edit--placement-new-vs-new"&gt;Edit : Placement New vs New&lt;/h2&gt;
&lt;p&gt;Placement new is a variation of the &lt;code&gt;new&lt;/code&gt; operator. Unlike plain &lt;code&gt;new&lt;/code&gt;, which allocates heap memory at an address you do not control, placement new constructs an object in storage that has already been allocated.&lt;/p&gt;
&lt;p&gt;The syntax is as below:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cpp"&gt;#include &amp;lt;new&amp;gt;

alignas(Object) unsigned char buf[sizeof(Object)]; // raw storage, nothing constructed yet
Object *myObj = new (buf) Object(); // construct the object inside buf
myObj-&amp;gt;~Object();                   // destroy explicitly; buf itself is not freed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Normally, the &lt;code&gt;new&lt;/code&gt; operator first allocates memory and then constructs the object at that address. Placement new skips the first step, so it can eliminate allocations when suitable storage is already available (as in memory pools or arenas).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/5857240/why-there-is-no-placement-delete-expression-in-c" target="_blank" rel="noopener"&gt;There is no placement-delete expression&lt;/a&gt;. An object created with placement new must be destroyed by calling its destructor explicitly (&lt;code&gt;myObj-&amp;gt;~Object()&lt;/code&gt;), after which the underlying storage is released however it was originally obtained.&lt;/p&gt;
&lt;h2 id="reference"&gt;Reference&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://barrgroup.com/Embedded-Systems/How-To/Malloc-Free-Dynamic-Memory-Allocation" target="_blank" rel="noopener"&gt;How to Allocate Dynamic Memory Safely&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/79923/what-and-where-are-the-stack-and-heap?noredirect=1&amp;amp;lq=1" target="_blank" rel="noopener"&gt;StackOverflow: What and where are the stack and heap?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;</description></item></channel></rss>