|
回复 #7 cyrosly 的帖子
引用:
cudaMemcpyAsync( indp, hp, size, cudaMemcpyHostToDevice, stream[ 0 ] );
for( unsigned int sid=0; sid<n_streams; ++sid )
{
kernel<<< grid_layout, block_layout, stream[ sid ] >>>( outdp+i*size, indp+i*size, ... );
if( sid<n_streams-1 ){
cudaMemcpyAsync( indp+( i+1 )*size, hp+( i+1 )*size, size, cudaMemcpyHostToDevice, stream[ sid+1 ] );
}
cudaMemcpyAsync( hp+i*size, outdp+i*size, size, cudaMemcpyDeviceToHost, stream[ sid ] );
}
cudaMemcpyAsync( hp+( n_streams-1 )*size, dp+( n_streams-1 )*size, size, cudaMemcpyDeviceToHost, stream[ n_streams-1 ] );
请问,这样是不是因为,在store调用之前,有一个隐式的store fence,如果全部放在一个for中,就成了顺序的了。如果像您这样写,是不是就在计算这一流的数据时,进行下一流的数据的加载的,然后才是store fence,这样能实现两个流的并行?要是全部的分开,就可以使数据的加载的for循环中,只要一个流的数据加载完毕,下一个流就可以开始进行,从而实现多个流的并行呢?
在http://www.drdobbs.com/architecture-and-design/217500110介绍WC主机端内存中,有“It is unclear if and when a CUDA programmer needs to take any action (such as using a memory fence) to ensure that the WC memory is in-place and ready for use by the host or graphics processor(s). The Intel documentation states that "[a] 'memory fence' instruction should be used to properly ensure consistency between the data producer and data consumer." The CUDA driver does use WC memory internally and must issue a store fence instruction whenever it sends a command to the GPU. For this reason, the NVIDIA documentation notes, "the application may not have to use store fences at all" (emphasis added). A rough rule of thumb that appears to work is to look to the CUDA commands prior to referencing WC memory and assume they issue a fence instruction. Otherwise, utilize your compiler intrinsic operations to issue a store fence instruction and guarantee that every preceding store is globally visible. This is compiler dependent. Linux compilers will probably understand the _mm_sfence intrinsic while Windows compilers will probably use _WriteBarrier. ”,我的理解是:如果用WC的内存,CUDA驱动会自动在异步设备向主机写数据的函数之前加入store fence,您能看看我的这种理解正确吗?只有WC方式才会这样吗?其它的时候可以全部放在一个for循环中,而不是您提供的那种方法吗?另外,流的个数一般怎样设定呢?我不知道为什么,无论要处理的数据量是多少,用4个流都没有用两个流的效果好,是不是更多的流只会增加CPU调用kernel函数、进行数据加载和指令加载的频率,而掩盖了异步所带来的性能提升了呢?我又担心我的流使用错了,所以没有显著的性能优化。。。
[ 本帖最后由 bzz168 于 2010-5-21 03:40 编辑 ] |
|