graph LR
shareMemory-->forksZygoteProcess-->frameworkCode
forksZygoteProcess-->resources
shareMemory-->mmapStaticData-->odex
mmapStaticData-->appResourceTable
mmapStaticData-->so
shareMemory-->sharedMemoryRegions-->ashmem-->|betweenAppAndScreenCompositor|windowSurfaces
ashmem-->|betweenContentProviderAndClient|cursorBuffers
sharedMemoryRegions-->gralloc
In order to fit everything it needs in RAM, Android tries to share RAM pages across processes. It can do so in the following ways:

- Each app process is forked from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes). To start a new app process, the system forks the Zygote process, then loads and runs the app's code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.
- Most static data is mmapped into a process. This technique allows data to be shared between processes, and also allows it to be paged out when needed. Example static data includes: Dalvik code (by placing it in a pre-linked .odex file for direct mmapping), app resources (by designing the resource table to be a structure that can be mmapped and by aligning the zip entries of the APK), and traditional project elements like native code in .so files.
- In many places, Android shares the same dynamic RAM across processes using explicitly allocated shared memory regions (either with ashmem or gralloc). For example, window surfaces use shared memory between the app and the screen compositor, and cursor buffers use shared memory between a content provider and its client.

Due to the extensive use of shared memory, determining how much memory your app is using requires care. Techniques to properly determine your app's memory use are discussed in Investigating Your RAM Usage.
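The mmap mechanism behind the second point can be sketched on a plain JVM. This is illustrative only (not Android-specific): it maps a file read-only with `java.nio`, the same underlying mechanism Android uses for .odex, resource, and .so pages. Pages of a read-only mapping stay "clean" and can be shared by every process that maps the same file.

```kotlin
import java.io.File
import java.io.RandomAccessFile
import java.nio.channels.FileChannel

// Write some stand-in "static data", then mmap it read-only and read one byte.
fun readMappedByte(index: Int): Byte {
    val file = File.createTempFile("static-data", ".bin")
    file.writeBytes(ByteArray(4096) { it.toByte() })  // stand-in for static data
    RandomAccessFile(file, "r").channel.use { channel ->
        val buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
        val b = buffer.get(index)  // served from the shared page cache, no private copy
        file.delete()
        return b
    }
}

fun main() {
    println(readMappedByte(42))  // prints 42
}
```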
The Dalvik heap is constrained to a single virtual memory range for each app process. This defines the logical heap size, which can grow as it needs to but only up to a limit that the system defines for each app.
The logical size of the heap is not the same as the amount of physical memory used by the heap. When inspecting your app's heap, Android computes a value called the Proportional Set Size (PSS), which accounts for both dirty and clean pages that are shared with other processes, but only in an amount proportional to how many processes share that RAM. This PSS total is what the system considers to be your physical memory footprint. For more information about PSS, see the Investigating Your RAM Usage guide.
The Dalvik heap does not compact the logical size of the heap, meaning that Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. However, the system can still reduce physical memory used by the heap. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise. So, paired allocations and deallocations of large chunks should result in reclaiming all (or nearly all) the physical memory used. However, reclaiming memory from small allocations can be much less efficient because the page used for a small allocation may still be shared with something else that has not yet been freed.
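The distinction between the logical heap size and what is actually in use has a plain-JVM analogue, sketched below. On Android, `Debug.getPss()` (unavailable off-device) would give the physical footprint; here the sketch only shows that the logical limit (`maxMemory`) is independent of current usage (`totalMemory`/`freeMemory`).

```kotlin
// Report the JVM analogues of logical heap limit vs. current usage.
fun heapStats(): Triple<Long, Long, Long> {
    val rt = Runtime.getRuntime()
    val logicalLimit = rt.maxMemory()      // upper bound the heap may grow to
    val committed = rt.totalMemory()       // heap currently reserved from the OS
    val used = committed - rt.freeMemory() // bytes actually allocated
    return Triple(logicalLimit, committed, used)
}

fun main() {
    val (limit, committed, used) = heapStats()
    println("limit=$limit committed=$committed used=$used")
}
```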
To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it receives an OutOfMemoryError.
In some cases, you might want to query the system to determine exactly how much heap space you have available on the current device, for example, to determine how much data is safe to keep in a cache. You can query the system for this figure by calling getMemoryClass(). This method returns an integer indicating the number of megabytes available for your app's heap.
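A minimal cache-sizing sketch based on the heap limit. On Android you would call `ActivityManager.getMemoryClass()`; on a plain JVM, `Runtime.maxMemory()` is the closest analogue, so that is used here. The 1/8 fraction is an illustrative choice, not a platform recommendation.

```kotlin
// Derive a cache budget from the heap limit so the cache alone
// can never push the app near OutOfMemoryError territory.
fun cacheBudgetBytes(): Long {
    val maxHeapBytes = Runtime.getRuntime().maxMemory()
    return maxHeapBytes / 8  // keep the cache well below the heap limit
}

fun main() {
    println("cache budget: ${cacheBudgetBytes()} bytes")
}
```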
Android devices contain three different types of memory: RAM, zRAM, and storage. Note that both the CPU and GPU access the same RAM.
Figure 1. Types of memory - RAM, zRAM, and storage
graph LR
subgraph can be moved/compressed into zRAM by kswapd
zRAMDirty("Dirty")
end
subgraph written back to the file in storage by kswapd
SharedDirty("Dirty")
end
subgraph can be deleted by kswapd
CachedClean("Clean")
end
Pages-->Free
Pages-->Used
Used-->|backed by a file on storage|Cached
Used-->|not backed by a file on storage|Anonymous
Cached-->Private-->CachedClean
Cached-->Shared-->CachedClean
Private-->zRAMDirty
Shared-->SharedDirty
Anonymous-->zRAMDirty
RAM is broken up into pages. Typically, each page is 4 KB of memory.
Pages are considered either free or used. Free pages are unused RAM. Used pages are RAM that the system is actively using, and are grouped into the following categories:
- Cached: Memory backed by a file on storage (for example, code or memory-mapped files). There are two types of cached memory:
  - Private: Owned by one process and not shared
    - Clean: Unmodified copy of a file on storage; can be deleted by kswapd to increase free memory
    - Dirty: Modified copy of the file in memory; can be moved/compressed into zRAM by kswapd to increase free memory
  - Shared: Used by multiple processes
    - Clean: Unmodified copy of the file on storage; can be deleted by kswapd to increase free memory
    - Dirty: Modified copy of the file in memory; allows changes to be written back to the file in storage by kswapd to increase free memory
- Anonymous: Memory not backed by a file on storage (for example, allocated by mmap() with the MAP_ANONYMOUS flag set)
  - Dirty: Can be moved/compressed into zRAM by kswapd to increase free memory

Note: Clean pages contain an exact copy of a file (or portion of a file) that exists in storage. A clean page becomes a dirty page when it no longer contains an exact copy of the file (for example, as the result of an application operation). Clean pages can be deleted because they can always be regenerated from the data in storage; dirty pages cannot be deleted, or data would be lost.
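The page taxonomy above can be modeled as a small decision function. This is a toy sketch (names are mine, not a kernel API): it classifies a used page by whether it is file-backed, shared, and modified, and reports what kswapd can do with it.

```kotlin
// Toy model of the used-page categories and their reclaim actions.
data class Page(val fileBacked: Boolean, val shared: Boolean, val dirty: Boolean)

fun reclaimAction(page: Page): String = when {
    page.fileBacked && !page.dirty -> "delete (clean, regenerable from storage)"
    page.fileBacked && page.dirty && page.shared -> "write back to the file in storage"
    else -> "compress into zRAM"  // private dirty and anonymous (always dirty) pages
}

fun main() {
    println(reclaimAction(Page(fileBacked = true, shared = false, dirty = false)))
}
```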
Android has two main mechanisms to deal with low-memory situations: the kernel swap daemon and the low-memory killer.

The kernel swap daemon (kswapd) is part of the Linux kernel and converts used memory into free memory. The daemon becomes active when free memory on the device runs low. The Linux kernel maintains low and high free-memory thresholds. When free memory falls below the low threshold, kswapd starts to reclaim memory. Once free memory reaches the high threshold, kswapd stops reclaiming memory.

kswapd can reclaim clean pages by deleting them, because they are backed by storage and have not been modified. If a process tries to address a clean page that has been deleted, the system copies the page from storage back into RAM. This operation is known as demand paging.
Figure 2. Clean page, backed by storage, deleted
kswapd can move cached private dirty pages and anonymous dirty pages to zRAM, where they are compressed. Doing so frees up available RAM (free pages). If a process tries to touch a dirty page in zRAM, the page is uncompressed and moved back into RAM. If the process associated with a compressed page is killed, the page is deleted from zRAM.
If the amount of free memory falls below a certain threshold, the system starts killing processes.
Figure 3. Dirty page moved to zRAM and compressed
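Why compressing dirty pages into zRAM frees memory can be demonstrated with ordinary deflate compression. This is illustrative only: zRAM compresses page contents inside the kernel (typically with lzo or lz4, not java.util.zip), but the effect is the same idea, since typical dirty pages are highly compressible.

```kotlin
import java.util.zip.Deflater

// Compress a page-sized buffer and return the compressed size in bytes.
fun compressedSize(page: ByteArray): Int {
    val deflater = Deflater()
    deflater.setInput(page)
    deflater.finish()
    val out = ByteArray(page.size * 2)  // worst-case scratch buffer
    val n = deflater.deflate(out)
    deflater.end()
    return n
}

fun main() {
    val page = ByteArray(4096)  // a zero-filled 4 KB "page"
    println(compressedSize(page))  // far smaller than 4096
}
```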
Many times, kswapd cannot free enough memory for the system. In this case, the system uses onTrimMemory() to notify an app that memory is running low and that it should reduce its allocations. If this is not sufficient, the kernel starts killing processes to free up memory. It uses the low-memory killer (LMK) to do this.
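A sketch of an onTrimMemory()-style handler. On Android the level constants come from ComponentCallbacks2; they are redeclared here (with their real values) so the sketch runs on a plain JVM. What to release at each level is an illustrative policy, not a platform requirement.

```kotlin
// Trim levels as defined by ComponentCallbacks2 (subset).
val TRIM_MEMORY_RUNNING_MODERATE = 5
val TRIM_MEMORY_RUNNING_LOW = 10
val TRIM_MEMORY_RUNNING_CRITICAL = 15
val TRIM_MEMORY_UI_HIDDEN = 20
val TRIM_MEMORY_COMPLETE = 80

// Decide how aggressively to shed allocations for a given trim level.
fun onTrimMemory(level: Int): String = when {
    level >= TRIM_MEMORY_COMPLETE -> "release everything non-essential"
    level >= TRIM_MEMORY_UI_HIDDEN -> "release UI-related caches"
    level >= TRIM_MEMORY_RUNNING_CRITICAL -> "trim caches aggressively"
    level >= TRIM_MEMORY_RUNNING_LOW -> "trim caches"
    else -> "no action"
}

fun main() {
    println(onTrimMemory(TRIM_MEMORY_UI_HIDDEN))  // prints "release UI-related caches"
}
```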
To decide which process to kill, LMK uses an "out of memory" score called oom_adj_score to prioritize the running processes. Processes with a high score are killed first: background apps are the first to go, and system processes are the last. Figure 4 shows the LMK scoring categories from high to low; items in the highest-scoring category, at the top, are killed first.
Figure 4. Android processes, with high scores at the top and low scores at the bottom
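Victim selection can be modeled as picking the maximum score. This is a toy sketch; the process names and score values below are made up for illustration (real scores live in /proc/&lt;pid&gt;/oom_score_adj).

```kotlin
// Toy model of LMK: among candidates, the highest oom_adj_score dies first.
data class Proc(val name: String, val oomAdjScore: Int)

fun pickVictim(procs: List<Proc>): Proc = procs.maxByOrNull { it.oomAdjScore }!!

fun main() {
    val procs = listOf(
        Proc("system_server", -900),         // system processes: killed last
        Proc("foreground app", 0),
        Proc("cached background app", 900),  // background apps: killed first
    )
    println(pickVictim(procs).name)  // prints "cached background app"
}
```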
The kernel tracks all memory pages in the system.
Figure 5. Pages used by different processes
When determining how much memory is being used by an app, the system must account for shared pages. Apps that access the same service or library will be sharing memory pages. For example, Google Play Services and a game app may be sharing a location service. This makes it difficult to determine how much memory belongs to the service at large versus each application.
Figure 6. Pages shared by two apps (middle)
To determine the memory footprint for an application, any of the following metrics may be used:

- RSS (Resident Set Size): The number of shared and non-shared pages used by the app
- PSS (Proportional Set Size): The number of non-shared pages used by the app plus an even distribution of the shared pages (for example, if three processes share 3 MB, each process gets 1 MB in its PSS)
- USS (Unique Set Size): The number of non-shared pages used by the app (shared pages are not included)
PSS is useful for the operating system when it wants to know how much memory is used by all processes since pages don’t get counted multiple times. PSS takes a long time to calculate because the system needs to determine which pages are shared and by how many processes. RSS doesn’t distinguish between shared and non-shared pages (making it faster to calculate) and is better for tracking changes in memory allocation.
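The three metrics can be computed from a toy model of page sharing. Entirely illustrative: real values come from /proc on Linux or Debug.getPss() on Android; here each resident page is represented only by the number of processes sharing it.

```kotlin
// Compute RSS/PSS/USS in KB for one process, given, for each of its resident
// 4 KB pages, how many processes share that page (1 = private).
val PAGE_KB = 4

fun footprints(shareCounts: List<Int>): Triple<Int, Double, Int> {
    val rss = shareCounts.size * PAGE_KB                     // every resident page counts fully
    val pss = shareCounts.sumOf { PAGE_KB.toDouble() / it }  // shared pages split among sharers
    val uss = shareCounts.count { it == 1 } * PAGE_KB        // only unshared pages
    return Triple(rss, pss, uss)
}

fun main() {
    // Four resident pages: two private, one shared by 2 processes, one by 4.
    println(footprints(listOf(1, 1, 2, 4)))  // prints (16, 11.0, 8)
}
```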
https://developer.android.com/topic/performance/memory-management
https://developer.android.com/topic/performance/memory
// These snippets assume an Android Context (e.g. inside an Activity).

// Per-app heap limit, in MB
val maxMemoryMb = (context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager).memoryClass

// System-wide memory information
val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
val memInfo = ActivityManager.MemoryInfo()
am.getMemoryInfo(memInfo)

// Maximum heap size the VM will attempt to use, in bytes
val maxHeapBytes = Runtime.getRuntime().maxMemory()

// This process's current PSS, in KB
val pssKb = Debug.getPss()

// Raw per-process counters (VmRSS, VmSize, ...) straight from the kernel
val reader = RandomAccessFile("/proc/self/status", "r")