People often say that std::function carries significant overhead and should be used with care.
But where exactly is std::function expensive? I found an article that profiles std::function. Its English is fairly simple, so I won't translate it; readers who struggle with English can also skip straight to the numbers below:
Popular folklore demands that you avoid std::function if you care about performance.
But is it really true? How bad is it?
Nanobenchmarking std::function
Benchmarking is hard. Microbenchmarking is a dark art. Many people insist that nanobenchmarking is out of reach for us mortals.
But that won’t stop us: let’s benchmark the overhead of creating and calling a std::function.
We have to tread extra carefully here. Modern desktop CPUs are insanely complex, often with deep pipelines, out-of-order execution, sophisticated branch prediction, prefetching, multiple levels of cache, hyperthreading, and many more arcane performance-enhancing features.
The other enemy is the compiler.
Any sufficiently advanced optimizing compiler is indistinguishable from magic.
We’ll have to make sure that our code-to-be-benchmarked is not being optimized away. Luckily, volatile is still not fully deprecated and can be (ab)used to prevent many optimizations. In this post we will only measure throughput (how long does it take to call the same function 1000000 times?). We’re going to use the following scaffold:
template<class F>
void benchmark(F&& f, float a_in = 0.0f, float b_in = 0.0f)
{
    auto constexpr count = 1'000'000;
    volatile float a = a_in;
    volatile float b = b_in;
    volatile float r;

    auto const t_start = std::chrono::high_resolution_clock::now();
    for (auto i = 0; i < count; ++i)
        r = f(a, b);
    auto const t_end = std::chrono::high_resolution_clock::now();

    auto const dt = std::chrono::duration<double>(t_end - t_start).count();
    std::cout << dt / count * 1e9 << " ns / op" << std::endl;
}
Double-checking on Godbolt, we can verify that the compiler is not optimizing away the function body even though we only compute 0.0f + 0.0f in a loop. The loop itself has some overhead, and sometimes the compiler will unroll parts of it.
Our test system in the following benchmarks is an Intel Core i9-9900K running at 4.8 GHz (a modern high-end consumer CPU at the time of writing). The code is compiled with clang-7 against the libstdc++ standard library, using -O2 and -march=native.
We start with a few basic tests:
benchmark([](float, float) { return 0.0f; }); // 0.21 ns / op (1 cycle / op)
benchmark([](float a, float b) { return a + b; }); // 0.22 ns / op (1 cycle / op)
benchmark([](float a, float b) { return a / b; }); // 0.62 ns / op (3 cycles / op)
The baseline is about 1 cycle per operation and the a / b test verifies that we can reproduce the throughput of basic operations (a good reference is AsmGrid, X86 Perf on the upper right). (I’ve repeated all benchmarks multiple times and chose the mode of the distribution.)
The first thing we want to know: How expensive is a function call?
using fun_t = float(float, float);
// inlineable direct call
float funA(float a, float b) { return a + b; }
// non-inlined direct call
__attribute__((noinline)) float funB(float a, float b) { return a + b; }
// non-inlined indirect call
fun_t* funC; // set externally to funA
// visible lambda
auto funD = [](float a, float b) { return a + b; };
// std::function with visible function
auto funE = std::function<fun_t>(funA);
// std::function with non-inlined function
auto funF = std::function<fun_t>(funB);
// std::function with function pointer
auto funG = std::function<fun_t>(funC);
// std::function with visible lambda
auto funH = std::function<fun_t>(funD);
// std::function with direct lambda
auto funI = std::function<fun_t>([](float a, float b) { return a + b; });
The results:
benchmark(funA); // 0.22 ns / op (1 cycle / op)
benchmark(funB); // 1.04 ns / op (5 cycles / op)
benchmark(funC); // 1.04 ns / op (5 cycles / op)
benchmark(funD); // 0.22 ns / op (1 cycle / op)
benchmark(funE); // 1.67 ns / op (8 cycles / op)
benchmark(funF); // 1.67 ns / op (8 cycles / op)
benchmark(funG); // 1.67 ns / op (8 cycles / op)
benchmark(funH); // 1.25 ns / op (6 cycles / op)
benchmark(funI); // 1.25 ns / op (6 cycles / op)
This suggests that only A and D are inlined and that there is some additional optimization possible when using std::function with a lambda.
We can also measure how long it takes to construct or copy a std::function:
std::function<float(float, float)> f;
benchmark([&]{ f = {}; }); // 0.42 ns / op ( 2 cycles / op)
benchmark([&]{ f = funA; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = funB; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = funC; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = funD; }); // 1.46 ns / op ( 7 cycles / op)
benchmark([&]{ f = funE; }); // 5.00 ns / op (24 cycles / op)
benchmark([&]{ f = funF; }); // 5.00 ns / op (24 cycles / op)
benchmark([&]{ f = funG; }); // 5.00 ns / op (24 cycles / op)
benchmark([&]{ f = funH; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = funI; }); // 4.37 ns / op (21 cycles / op)
The result of f = funD suggests that constructing a std::function directly from a lambda is pretty fast. Let’s check that when using different capture sizes:
struct b4 { int32_t x; };
struct b8 { int64_t x; };
struct b16 { int64_t x, y; };
benchmark([&]{ f = [](float, float) { return 0; }; }); // 1.46 ns / op ( 7 cycles / op)
benchmark([&]{ f = [x = b4{}](float, float) { return 0; }; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = [x = b8{}](float, float) { return 0; }; }); // 4.37 ns / op (21 cycles / op)
benchmark([&]{ f = [x = b16{}](float, float) { return 0; }; }); // 1.66 ns / op ( 8 cycles / op)
I didn’t have the patience to untangle the assembly or the libstdc++ implementation to check where this behavior originates. You obviously have to pay for the capture, and I think what we see here is a strange interaction between some kind of small-object optimization and the compiler hoisting the construction of b16{} out of our measurement loop.
I think there is a lot of fearmongering regarding std::function; not all of it is justified.
My benchmarks suggest that on a modern microarchitecture the following overhead can be expected on hot data and instruction caches:
| operation | overhead |
|---|---|
| calling a non-inlined function | 4 cycles |
| calling a function pointer | 4 cycles |
| calling a std::function of a lambda | 5 cycles |
| calling a std::function of a function or function pointer | 7 cycles |
| constructing an empty std::function | 7 cycles |
| constructing a std::function from a function or function pointer | 21 cycles |
| copying a std::function | 21..24 cycles |
| constructing a std::function from a non-capturing lambda | 7 cycles |
| constructing a std::function from a capturing lambda | 21+ cycles |
A word of caution: the benchmarks really only represent the overhead relative to a + b. Different functions show slightly different overhead behavior as they might use different scheduler ports and execution units that might overlap differently with what the loop requires. Also, a lot of this depends on how willing the compiler is to inline.
We’ve only measured the throughput. The results are only valid for “calling the same function many times with different arguments”, not for “calling many different functions”. But that is a topic for another post.
Source: https://mp.weixin.qq.com/s/wU31yx3b5d-ncyq02-inSw