The problem with writing to /dev/null

If you, like me, take privacy very seriously, you most likely already write logs or other metadata to /dev/null in Linux. I write all sorts of logs to /dev/null, whether it's nginx error and access logs or just dummy data.

However, there has always been a problem with writing to /dev/null. The data still has to pass through memory first: it sits in a buffer in the process's address space (backed by memory pages, typically 4 KB each) before the null device driver discards it. This means there will always be a moment where the data destined for /dev/null exists in memory. How can we test this?

So let's take a very basic example: an infinite loop that writes data to /dev/null, where each line is padded with a random value:

while true; do echo "CAN YOU_READ_THIS_$RANDOM" > /dev/null; done

I tested this in Subgraph OS, which is based on Debian. Anyway, if we let this while-loop run for a while and then dump the entire memory and search for the string, we will actually find it, as you can see in the picture below. Keep in mind, though, that I only found one instance of this random value.
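If you want to reproduce the dump yourself, here is a minimal sketch of one way to do it. I'm not showing the exact tool I used; this version assumes gcore (shipped with gdb) and strings (from binutils) are installed:

# Start the loop in the background and note the subshell's PID
while true; do echo "CAN YOU_READ_THIS_$RANDOM" > /dev/null; done &
LOOP_PID=$!

# Let it run for a bit, then dump the process memory to a core file
sleep 5
sudo gcore -o /tmp/loopdump "$LOOP_PID"

# Search the dump for the marker string
strings "/tmp/loopdump.$LOOP_PID" | grep "CAN YOU_READ_THIS"

# Clean up
kill "$LOOP_PID"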

The way this works: the shell first expands $RANDOM and builds the full string that echo will print, so the string has to exist in the process's memory before anything else happens. Only then is it written, and because of the redirect it goes directly to /dev/null instead of stdout.
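You can watch this happen with strace (assuming it is installed): even with the redirect in place, the fully expanded string is handed to the kernel as the argument to the write() system call. The output will look something like the comment below, with whatever value $RANDOM produced:

# Trace write() calls made by a single echo; fd 1 points at /dev/null,
# but the expanded string is still passed to the kernel in full
strace -e trace=write bash -c 'echo "CAN YOU_READ_THIS_$RANDOM" > /dev/null'
# ... write(1, "CAN YOU_READ_THIS_28154\n", 24) = 24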

So the problem here is really how redirection works in Linux. A redirect only changes where the output goes; whatever you echo still has to be generated in memory first.

The fix is not to redirect to /dev/null. Instead, the application should offer an option to disable logging entirely, so the log data is never generated in the first place. Or, if you can modify the source code, you can make the output unreadable before it is written.
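For the nginx case mentioned earlier, for example, access logging can be switched off outright instead of being redirected, which means the log lines are never formatted in memory at all. A minimal sketch of the relevant config:

# nginx.conf: disable access logging instead of sending it to /dev/null
server {
    listen 80;
    access_log off;
    # Note: error_log cannot be fully disabled; it can only be
    # restricted to the most severe messages, e.g.:
    # error_log stderr emerg;
}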