opm-simulators
Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType > Class Template Reference

Derived class of GPUSender that handles MPI calls that should NOT use GPU-direct communication. The implementation moves data from the GPU to the CPU and then sends it using regular MPI. More...

#include <GpuObliviousMPISender.hpp>

Inheritance diagram for Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >:
Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >

Public Types

using X = GpuVector<field_type>
Public Types inherited from Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >
using X = GpuVector<field_type>

Public Member Functions

 GPUObliviousMPISender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy)
void copyOwnerToAll (const X &source, X &dest) const override
 copyOwnerToAll will copy the data in source to all processes.
Public Member Functions inherited from Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >
 GPUSender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy)
void project (X &x) const
 project will project x onto the owned subspace.
void dot (const X &x, const X &y, field_type &output) const
 dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes.
field_type norm (const X &x) const
 norm computes the l^2-norm of x across processes.
const ::Dune::Communication< MPI_Comm > & communicator () const
 communicator returns the MPI communicator used by this GPUSender.

Additional Inherited Members

Protected Attributes inherited from Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >
std::once_flag m_initializedIndices
std::unique_ptr< GpuVector< int > > m_indicesOwner
std::unique_ptr< GpuVector< int > > m_indicesCopy
const OwnerOverlapCopyCommunicationType & m_cpuOwnerOverlapCopy

Detailed Description

template<class field_type, int block_size, class OwnerOverlapCopyCommunicationType>
class Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >

Derived class of GPUSender that handles MPI calls that should NOT use GPU-direct communication. The implementation moves data from the GPU to the CPU and then sends it using regular MPI.

Template Parameters
field_type  is float or double
block_size  is the block size of the block elements in the matrix
OwnerOverlapCopyCommunicationType  is typically a Dune::LinearOperator::communication_type

Member Function Documentation

◆ copyOwnerToAll()

template<class field_type, int block_size, class OwnerOverlapCopyCommunicationType>
void Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >::copyOwnerToAll ( const X & source, X & dest ) const
inline override virtual

copyOwnerToAll will copy the data in source to all processes.

Note
Depending on the implementation, this may or may not use GPU-aware MPI. If it does not, the data will be copied to the CPU before the communication.
Parameters
[in] source  the vector whose owner values are sent
[out] dest  the vector receiving the copied data

Implements Opm::gpuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >.


The documentation for this class was generated from the following file: GpuObliviousMPISender.hpp