By default, TensorFlow pre-allocates the whole memory of the GPU card (which can cause a CUDA_OUT_OF_MEMORY warning).
To change this, you can either:

- change the fraction of memory that is pre-allocated, using the per_process_gpu_memory_fraction config option. It takes a value between 0 and 1 that indicates what fraction of the available GPU memory to pre-allocate for each process: 1 means pre-allocate all of the GPU memory, 0.5 means the process allocates ~50% of the available GPU memory.
- disable the pre-allocation entirely, using the allow_growth config option. If true, the allocator does not pre-allocate the entire specified GPU memory region, but instead starts small and grows the allocation as usage grows.
For example:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
or
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
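Note that with allow_growth, TensorFlow never releases memory back to the system once it has been allocated (releasing it could worsen memory fragmentation), so the process's GPU footprint only grows over its lifetime.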
More information on these config options can be found in TensorFlow's config.proto protocol buffer definition, where GPUOptions is declared.
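For newer TensorFlow 2.x code, where ConfigProto and Session are no longer used, the same effect can be obtained through the tf.config API. The following is a minimal sketch, assuming a TF 2.x installation with at least one visible GPU:

import tensorflow as tf  # assumes TensorFlow 2.x

# Memory growth must be configured before any GPU is initialized.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # Equivalent of allow_growth: start small and grow as needed
    # instead of pre-allocating the whole card.
    tf.config.experimental.set_memory_growth(gpu, True)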