Despite a certain degree of control over random number generation in the various Python packages involved (see below), TF training on a GPU is generally not 100% reproducible, because the order in which floating-point operations complete across parallel CUDA cores is not deterministic.
Being able to force training to run on the CPU would improve reproducibility of results, albeit at the cost of longer training times; that trade-off is a choice for the user to make.
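One way to force TensorFlow onto the CPU is to hide the CUDA devices before TensorFlow initializes; the sketch below uses the `CUDA_VISIBLE_DEVICES` environment variable, which is one common route (the exact setup of the user's session is an assumption here, and the variable must be set before `import tensorflow` to take effect):

```python
import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# NB: this must run BEFORE tensorflow is imported, or it has no effect.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

# Alternative (TF 2.x), after import but before any op touches the GPU:
# import tensorflow as tf
# tf.config.set_visible_devices([], 'GPU')
```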
import os
import random as pyrandom
import numpy as np
import tensorflow as tf

# commonSeed is assumed to be defined earlier in the notebook, e.g. commonSeed = 42

def resetRandoms(rSeed=commonSeed):
    # 0. Set PYTHONHASHSEED per https://stackoverflow.com/questions/58067359/is-there-a-way-to-set-pythonhashseed-for-a-jupyter-notebook-session/61953451#61953451
    #    Note: https://stackoverflow.com/questions/54865930/how-to-set-pythonhashseed-for-jupyter-notebook reports that
    #    setting the env var from within a running notebook is too late to affect string hashing,
    #    but we keep the code just in case.
    # 1. Set the PYTHONHASHSEED environment variable
    os.environ['PYTHONHASHSEED'] = str(rSeed)  # NB this key does not exist until this is executed
    # 2. Seed Python's built-in pseudo-random generator
    pyrandom.seed(rSeed)
    # 3. Seed the NumPy pseudo-random generator
    np.random.seed(rSeed)
    # 4. Seed the TensorFlow pseudo-random generator; see also
    #    https://www.tensorflow.org/api_docs/python/tf/random/set_seed
    tf.random.set_seed(rSeed)
    # For legacy TF 1.x code, use instead:
    # tf.compat.v1.set_random_seed(rSeed)
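As a minimal, stdlib-only illustration of why re-seeding matters (this exercises only step 2 above; the NumPy and TensorFlow generators behave analogously once seeded):

```python
import random as pyrandom

def draw(seed, n=5):
    # Re-seed, then draw n pseudo-random floats.
    pyrandom.seed(seed)
    return [pyrandom.random() for _ in range(n)]

# Identical seeds yield identical sequences...
assert draw(42) == draw(42)
# ...while different seeds (almost surely) do not.
assert draw(42) != draw(43)
```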