After asking the support team how memory usage is calculated, their answer was:
Our memory calculation is as follows: (MemTotal - MemFree - Cache) / MemTotal
These values are taken from /proc/meminfo.
If I do the math, I get exactly the number the graph shows, but I don't think the calculation is accurate.
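For reference, here is a minimal sketch of that calculation in Python, assuming "Cache" refers to the Cached field of /proc/meminfo (the helper name is mine):

    def read_meminfo():
        """Parse /proc/meminfo into a dict of {field: value in kB}."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])  # strip the trailing "kB"
        return info

    m = read_meminfo()
    # The support team's formula, as stated above:
    used = (m["MemTotal"] - m["MemFree"] - m["Cached"]) / m["MemTotal"]
    print(f"memory usage: {used:.1%}")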
Here is some relevant information I found in the Linux kernel source tree:
Many load balancing and workload placing programs check /proc/meminfo to
estimate how much free memory is available. They generally do this by
adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached includes memory that is not freeable as page
cache, for example shared memory segments, tmpfs, and ramfs, and it does
not include reclaimable slab memory, which can take up a large fraction
of system memory on mostly idle systems with lots of files.
Currently, the amount of memory that is available for a new workload,
without pushing the system into swap, can be estimated from MemFree,
Active(file), Inactive(file), and SReclaimable, as well as the "low"
watermarks from /proc/zoneinfo.
However, this may change in the future, and user space really should not
be expected to know kernel internals to come up with an estimate for the
amount of free memory.
It is more convenient to provide such an estimate in /proc/meminfo. If
things change in the future, we only have to change it in one place.
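To illustrate how much kernel detail that manual estimate drags into user space, here is a rough sketch of it (simplified from the heuristic the kernel itself uses in si_mem_available(); the halving of page cache and slab follows the commit's description, but the /proc/zoneinfo parsing and the 4 kB page size are assumptions on my part):

    def read_meminfo():
        """Parse /proc/meminfo into a dict of {field: value in kB}."""
        with open("/proc/meminfo") as f:
            return {k: int(v.split()[0]) for k, v in
                    (line.split(":", 1) for line in f)}

    def low_watermark_kb(page_size_kb=4):
        """Sum the per-zone "low" watermarks from /proc/zoneinfo.
        Watermarks are counted in pages; 4 kB pages are assumed."""
        pages = 0
        with open("/proc/zoneinfo") as f:
            for line in f:
                fields = line.split()
                if len(fields) == 2 and fields[0] == "low":
                    pages += int(fields[1])
        return pages * page_size_kb

    m = read_meminfo()
    wmark = low_watermark_kb()

    # The system needs some page cache to function well: reserve the
    # smaller of half the file LRU and the low watermark as cache that
    # will stay in use.
    pagecache = m["Active(file)"] + m["Inactive(file)"]
    pagecache -= min(pagecache // 2, wmark)

    # Likewise, not all reclaimable slab can actually be reclaimed.
    slab = m["SReclaimable"]
    slab -= min(slab // 2, wmark)

    available_kb = m["MemFree"] - wmark + pagecache + slab
    print(f"estimated available: {available_kb} kB")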
This is where MemAvailable was born:
An estimate of how much memory is available for starting new applications, without
swapping.
Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the
low watermarks in each zone.
The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use.
The impact of those factors will vary from system to system.
Wouldn't it be better to calculate memory usage as:
(MemTotal - MemAvailable) / MemTotal
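That would reduce the whole thing to a couple of lines (again just a sketch; note that MemAvailable only appeared in kernel 3.14, so older kernels would need a fallback):

    with open("/proc/meminfo") as f:
        m = {k: int(v.split()[0]) for k, v in (line.split(":", 1) for line in f)}

    # MemAvailable exists since kernel 3.14; older kernels need a fallback.
    used = (m["MemTotal"] - m["MemAvailable"]) / m["MemTotal"]
    print(f"memory usage: {used:.1%}")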