
Fix memory usage graph value calculation using MemAvailable

After asking the support team how memory usage is calculated, this was their answer:

Our memory calculation is as follows: (MemTotal - MemFree - Cache) / MemTotal 

These values are taken from /proc/meminfo.

If I do the math, I get exactly the number the graph is showing. But I don't think the calculation is accurate.
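
For illustration, here is a minimal Python sketch of the calculation as support describes it. It assumes the "Cache" term in their formula refers to the "Cached" field of /proc/meminfo (all values are reported in kB):

#!/usr/bin/env python3
# Minimal sketch: reproduce the value the current graph shows.
# Assumption: "Cache" in the support answer means the "Cached"
# field of /proc/meminfo.

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of integer kB values."""
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])
    return info

m = read_meminfo()
usage = (m["MemTotal"] - m["MemFree"] - m["Cached"]) / m["MemTotal"]
print(f"usage per current graph formula: {usage:.1%}")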

Here is some relevant information found in the Linux kernel source tree:

Many load balancing and workload placing programs check /proc/meminfo to
estimate how much free memory is available.  They generally do this by
adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.

It is wrong because Cached includes memory that is not freeable as page
cache, for example shared memory segments, tmpfs, and ramfs, and it does
not include reclaimable slab memory, which can take up a large fraction
of system memory on mostly idle systems with lots of files.

Currently, the amount of memory that is available for a new workload,
without pushing the system into swap, can be estimated from MemFree,
Active(file), Inactive(file), and SReclaimable, as well as the "low"
watermarks from /proc/zoneinfo.

However, this may change in the future, and user space really should not
be expected to know kernel internals to come up with an estimate for the
amount of free memory.

It is more convenient to provide such an estimate in /proc/meminfo.  If
things change in the future, we only have to change it in one place.
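
To make that concrete, here is a rough Python sketch of the estimate the quote describes, following the arithmetic in the commit message that introduced MemAvailable (the actual kernel code is more careful; read_meminfo is the helper from the sketch above):

import os

def low_watermark_kb(path="/proc/zoneinfo"):
    """Sum the per-zone "low" watermarks (reported in pages)."""
    pages = 0
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == "low":
                pages += int(fields[1])
    return pages * os.sysconf("SC_PAGE_SIZE") // 1024

m = read_meminfo()
wmark = low_watermark_kb()
# Page cache that could be dropped for new workloads; the system
# still needs some of it to function well.
pagecache = m["Active(file)"] + m["Inactive(file)"]
pagecache -= min(pagecache // 2, wmark)
# Not all reclaimable slab can actually be reclaimed.
slab = m["SReclaimable"]
slab -= min(slab // 2, wmark)
available_kb = m["MemFree"] - wmark + pagecache + slab
print(f"estimated available: {available_kb} kB")

This is exactly the kind of kernel-internal bookkeeping that the commit argues user space should not have to replicate.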


This is where MemAvailable was born:

An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone.

The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.


Wouldn't it be better to calculate memory usage as:

(MemTotal - MemAvailable) / MemTotal 
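
A rough sketch of the proposed calculation, falling back to the current formula on kernels older than 3.14 (which predate MemAvailable):

m = read_meminfo()  # helper from the first sketch above
if "MemAvailable" in m:
    # Linux 3.14 and later expose the kernel's own estimate.
    usage = (m["MemTotal"] - m["MemAvailable"]) / m["MemTotal"]
else:
    # Older kernels: fall back to the current approximation.
    usage = (m["MemTotal"] - m["MemFree"] - m["Cached"]) / m["MemTotal"]
print(f"usage per proposed formula: {usage:.1%}")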

  • Daniel Pereyra
  • Feb 12 2020
  • Needs review
Control Panel
  • Comments (2)
  • Votes (10)
  • Guest commented
    4 Mar 06:25am

    The issue still persists, and it is confusing too, because based on the current graph one could get the idea that their app is leaking memory, whereas it might very well be the case that the memory is being used by Linux as cache and can be reclaimed by applications if they request it. At least, that is as far as my understanding of this topic goes.


  • Daniel Pereyra commented
    12 Feb, 2020 11:29am

    You can also see: https://www.digitalocean.com/community/questions/digitalocean-is-calculating-wrong-the-memory-graph-in-monitoring?answer=58621

    This is a community question discussing the same issue.

