> When multiplying a value with a billion (for some reason I need to do that)

It would be worth asking yourself if you really "need to do that". And how precise the result needs to be.

You're banging your head against the limits of the IEEE 754 floating point specification, which, incidentally, gives you more accurate floating point arithmetic than was available when NASA put a man on the moon. Just for a point of perspective.
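Concretely, a result like 156199993344 is the signature of *single* precision (a C-style `float`), which carries only about seven significant decimal digits; doing the same multiplication in double precision gives the answer you expected. A minimal sketch simulating both, assuming the operands were something like 156.2 and one billion (the question's exact values aren't shown here, but these reproduce the quoted result):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Single precision: round 156.2 to a float, multiply, round the product.
single = to_f32(to_f32(156.2) * 1e9)
# Double precision: Python floats are already IEEE 754 doubles.
double = 156.2 * 1e9

print(int(single))  # 156199993344 -- the question's "incorrect" result
print(int(double))  # 156200000000
```

The `struct` round-trip through the `'f'` format is just a portable way to see what a 32-bit `float` would hold at each step.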

Your "incorrect" result of 156199993344 is off by the enormous «ahem» amount of roughly 0.000004%. If you really need higher accuracy, there are other ways of doing computer arithmetic that maintain greater precision, but most of them take a fair amount of work, and the details depend on your needs. So, just what is it that you're doing that requires picometer accuracy?