Frankly, comparing languages is a thankless task: there will always be benchmarks in which one language beats another, and there will always be people who declare the test unrepresentative and insist such code would never occur in real life. Nevertheless, I would not call the OP's results all that unexpected: in .NET, memory allocation really is faster than in native languages without a custom allocator, and small benchmarks tend to load the allocator much more heavily than, say, the method-call machinery.

The reason for this difference in allocator performance is that objects in C++ cannot be relocated, so the usual allocation algorithm (which, as is well known, maintains a list of free blocks and searches it for a suitable one on each allocation) is slow and, worse, requires a global lock (which makes things even worse in multithreaded scenarios). On top of that, objects in C++ tend to be freed as early as possible, which produces an additional workload of deallocations, each of which also takes the global lock.

In a managed environment things are different. Objects are always allocated at the top of the managed heap, so an allocation is hardly more expensive than an InterlockedIncrement. .NET does not need to maintain a list of free blocks either, because the managed heap is compacted during garbage collection: the objects are moved, filling in the "holes".

That C++ code may well turn out slower than C# code is not news. See, for example, the excellent story of a simple application written by masters of native programming, https://docs.microsoft.com/en-us/archive/blogs/ricom/performance-quiz-6-chineseenglish-dictionary-reader , and Jeff Atwood's summary, https://blog.codinghorror.com/on-managed-code-performance-again/ :

"To beat the performance of the C# version, Raymond had to write his own file I/O routines, rewrite the string class, use a custom allocator, and write his own code-page mapping routine."

This is also confirmed by the benchmarks below: the out-of-the-box native containers lose noticeably to the .NET one, while (some) hand-written native containers beat it.

Now for the most interesting part: the measurements.

C#:

using System;
using System.Collections.Generic;
using System.Diagnostics;
namespace Sharp
{
    class Program
    {
        static void Main(string[] args)
        {
            var dict = new Dictionary<int, int>();
            int seed = 1;
            var timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < 10000000; i++)
            {
                seed = 1664525 * seed + 1013904223;
                dict.Add(seed, i);
            }
            timer.Stop();
            Console.WriteLine(
                "elapsed time = {0} ms, dictionary entries count = {1}",
                timer.ElapsedMilliseconds,
                dict.Count);
        }
    }
}
C++:

#include "stdafx.h"
#include <ctime>
#include <map>
#include <iostream>
using namespace std;
int main(int argc, char* argv[])
{
    map<int, int> dict;
    int seed = 1;
    auto begin = clock();
    for (int i = 0; i < 10000000; i++)
    {
        seed = 1664525 * seed + 1013904223;
        dict.insert(make_pair(seed, i));
    }
    auto end = clock();
    double elapsedMs = double(end - begin) * 1000.0 / CLOCKS_PER_SEC;
    cout << "elapsed time = " << elapsedMs
         << " ms, dictionary entries count = " << dict.size()
         << endl;
    return 0;
}
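One remark on the measurement before the numbers: clock() is adequate here on Windows, where it returns wall-clock time, but on other platforms it measures CPU time of the whole process. A portable alternative (a sketch, not what was actually run) is std::chrono:

#include <chrono>

// drop-in replacement for the clock()-based timing above
auto begin = std::chrono::steady_clock::now();
// ... the insertion loop ...
auto end = std::chrono::steady_clock::now();
double elapsedMs =
    std::chrono::duration<double, std::milli>(end - begin).count();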
Measurement results (release mode, 5 consecutive runs, no debugger attached):

C#:

elapsed time = 1138 ms, dictionary entries count = 10000000
elapsed time = 1127 ms, dictionary entries count = 10000000
elapsed time = 1133 ms, dictionary entries count = 10000000
elapsed time = 1134 ms, dictionary entries count = 10000000
elapsed time = 1129 ms, dictionary entries count = 10000000

C++:

elapsed time = 8377 ms, dictionary entries count = 10000000
elapsed time = 8408 ms, dictionary entries count = 10000000
elapsed time = 8377 ms, dictionary entries count = 10000000
elapsed time = 8377 ms, dictionary entries count = 10000000
elapsed time = 8361 ms, dictionary entries count = 10000000

Average time: C# = 1132 ms, C++ = 8379 ms.

I am not claiming my tests are perfect, and of course they are only relevant for my machine; if someone suggests a better measurement method, I will gladly apply it. Nevertheless, in my environment System.Collections.Generic.Dictionary beats std::map at adding elements by a wide margin.

Note that Dictionary uses a hash table, while std::map in my implementation uses a red-black tree as the underlying data structure. Hash tables are usually faster by themselves, so allocation speed is not the only reason for Dictionary's lead.

Replacing make_pair(seed, i) with pair<int, int>(seed, i) in the C++ version, as suggested by @igumnov, did not make a noticeable difference: 8361/8392/8361/8408/8361.

Replacing std::map with std::unordered_map, as suggested by @Cotics, gave a substantial speedup: 2230/2230/2230/2246 (average 2233). Still, C++ remains almost twice as slow.

Replaced std::unordered_map with uthash, again on @igumnov's advice. The result came out slightly worse than std::unordered_map: 2963/2932/2948/2948/2948/2932. Code:

void testUhash()
{
    struct myint
    {
        int key;
        int value;
        UT_hash_handle hh;
    };
    struct myint* dict = NULL;
    int seed = 1;
    auto begin = clock();
    for (int i = 0; i < 10000000; i++)
    {
        seed = 1664525 * seed + 1013904223;
        struct myint* ps = (struct myint*)malloc(sizeof(struct myint));
        ps->key = seed;
        ps->value = i;
        HASH_ADD_INT(dict, key, ps);
    }
    auto end = clock();
    double elapsedMs = double(end - begin) * 1000.0 / CLOCKS_PER_SEC;
    cout << "elapsed time = " << elapsedMs
         << " ms, dictionary entries count = " << HASH_COUNT(dict)
         << endl;
}
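A side note on the uthash variant: the test never frees its 10 million malloc'ed nodes, which is harmless in a one-shot benchmark process but ties back to the point above about deallocation being a real workload in C++. A cleanup sketch using uthash's standard iteration macros (assuming the same myint/dict names):

struct myint* el;
struct myint* tmp;
HASH_ITER(hh, dict, el, tmp)
{
    HASH_DEL(dict, el); // unlink from the table
    free(el);           // release the node itself
}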
Update: added capacity = 10000000 to the C++ version and, for a fair comparison, to the C# version as well. The changes:

C++:

unordered_map<int, int> dict(10000000);
C#:

var dict = new Dictionary<int, int>(10000000);
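An aside, not part of the original experiment: for unordered_map the constructor argument is a bucket-count hint, while the closest analogue of the C# capacity is reserve(), which sizes the table for a given number of elements up front:

unordered_map<int, int> dict;
dict.reserve(10000000); // room for 10 million elements without rehashing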
Indeed, both got faster:

C++: 1826/1856/1857/1841/1825, average 1841
C#: 790/786/801/790/791, average 792

C# is still more than twice as fast.

Update: @KoVadim suggested removing the seed computation (keeping the capacity); the working loop is now:

C++:

for (int i = 0; i < 10000000; i++)
{
    //seed = 1664525 * seed + 1013904223;
    dict.insert(pair<int, int>(i, i));
}
C#:

for (int i = 0; i < 10000000; i++)
{
    //seed = 1664525 * seed + 1013904223;
    dict.Add(i, i);
}
Results:

C++: 1498/1514/1498/1498, average 1501
C#: 129/129/135/133/132, average 132

(My guess at part of the C# figure: Int32.GetHashCode() returns the value itself, so sequential keys land in sequential buckets with no collisions and near-linear memory access.)

@igumnov contributed a khash version. Code:

KHASH_MAP_INIT_INT(32, int)
void testKhash()
{
    int seed = 1;
    khiter_t iter;
    khash_t(32)* dict = kh_init(32);
    int dummy;
    auto begin = clock();
    for (int i = 0; i < 10000000; i++)
    {
        seed = 1664525 * seed + 1013904223;
        iter = kh_put(32, dict, seed, &dummy);
        kh_value(dict, iter) = i;
    }
    auto end = clock();
    double elapsedMs = double(end - begin) * 1000.0 / CLOCKS_PER_SEC;
    cout << "elapsed time = " << elapsedMs
         << " ms, dictionary entries count = " << kh_size(dict)
         << endl;
}
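Part of khash's advantage fits the allocation story: it keeps keys and values in flat arrays instead of allocating a node per entry. That also means the whole table is released with a single call (a sketch, assuming the same map type named 32):

kh_destroy(32, dict); // frees the key, value and flag arrays in one go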
Result: 577/577/608/577/577, average 583; a convincing win for the native code. As a reminder, the best result of the standard .NET container is 792 ms.

Who will suggest a custom container for .NET?

I tried the implementation from https://code.google.com/archive/p/mapreduce-net/source (project: https://web.archive.org/web/20160121172200/https://code.google.com/p/mapreduce-net/ ). It came out slightly slower than the built-in Dictionary: 853/865/842/841/842, average 849.

Measured the raw .NET allocation speed to check @Dith's hypothesis: the constructor of an empty class is run 10 million times. Code:

C#:

static class Allocation
{
    class Foo
    {
    }

    static public void Test()
    {
        const int size = 10000000;
        var timer = new Stopwatch();
        timer.Start();
        for (int i = 0; i < size; i++)
        {
            new Foo();
        }
        timer.Stop();
        Console.WriteLine("elapsed time = {0} ms", timer.ElapsedMilliseconds);
    }
}
C++ (the snippet assumes <windows.h> from stdafx.h for QueryPerformanceCounter; an empty class Foo, mirroring the C# version, is added here for completeness):

class Foo
{
};

void testAlloc()
{
    const int size = 10000000;
    LARGE_INTEGER li;
    if (!QueryPerformanceFrequency(&li))
        exit(1);
    double freq = double(li.QuadPart) / 1000.0; // ticks per millisecond
    QueryPerformanceCounter(&li);
    auto begin = li.QuadPart;
    for (int i = 0; i < size; i++)
        new Foo(); // intentionally leaked: we measure allocation only
    QueryPerformanceCounter(&li);
    auto end = li.QuadPart;
    double elapsedMs = double(end - begin) / freq;
    cout << "elapsed time = " << elapsedMs
         << " ms" << endl;
}
Results:

C#: 58/54/28/55/55 (average 50)
C++: 407.719/400.693/401.674/401.926/399.976 (average 402.4)
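This last gap illustrates the claim at the beginning of the answer: a managed allocation is essentially "advance the heap-top pointer", while a native new goes through a general-purpose allocator. A toy bump allocator in C++ (my sketch, heavily simplified: the real managed heap also does garbage collection, segment management and per-thread allocation buffers, and would bump a shared pointer with an InterlockedIncrement):

#include <cstdlib>
#include <cstddef>
#include <new>

// Toy bump allocator: the managed heap's fast path in a few lines.
// There is no free list and no search; allocation is one addition.
class Arena
{
    char*  base;
    char*  top;
    size_t size;
public:
    explicit Arena(size_t bytes)
        : base((char*)malloc(bytes)), top(base), size(bytes) {}
    ~Arena() { free(base); } // the whole arena is released at once

    void* allocate(size_t bytes)
    {
        bytes = (bytes + 7) & ~size_t(7);           // keep 8-byte alignment
        if (top + bytes > base + size) return NULL; // real code would grow or collect
        void* p = top;
        top += bytes; // the entire cost of "finding a free block"
        return p;
    }
};

struct Foo { int x; };

int main()
{
    Arena arena(1024);
    // placement new constructs the object in arena memory
    Foo* f = new (arena.allocate(sizeof(Foo))) Foo();
    f->x = 42;
    return 0;
}

Allocating each of the 10 million Foo objects this way costs an addition and a bounds check per object, which is consistent with the managed result above being nearly an order of magnitude faster than the native one.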