Seminars


Fair and Efficient Dynamic Memory De-Bloating

Series: M.Tech (Research) Colloquium

Speaker: Parth Gangar, M.Tech (Research) student, Dept. of CSA

Date/Time: Sep 12 10:00:00

Location: CSA Auditorium (Room No. 104, Ground Floor)

Faculty Advisor: Prof. Vinod Ganapathy & Prof. K Gopinath

Abstract:
The virtual memory abstraction simplifies programming and enhances portability,
but it requires the processor to translate virtual addresses to physical
addresses, which can be expensive. To speed up virtual-to-physical address
translation, processors cache recently used translations in Translation
Lookaside Buffers (TLBs), and further use huge (aka large) pages to reduce TLB
misses. For example, the x86 architecture supports 2MB and 1GB huge pages.
However, fully harnessing the performance benefits of huge pages requires
robust operating system support. For example, huge pages are notorious for
creating memory bloat, a phenomenon wherein an application is allocated more
physical memory than it needs. This leads to a tradeoff between performance
and memory efficiency, wherein application performance can be improved at the
potential expense of allocating extra physical memory. Ideally, a system
should manage this tradeoff dynamically depending on the availability of
physical memory at runtime.
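To make the tradeoff concrete, here is a small illustrative sketch (our own, not from the thesis): backing an allocation with 2MB pages shrinks the number of TLB entries needed to cover it by roughly 512x, but rounds its physical footprint up to the next 2MB boundary, and that rounding slack is one source of memory bloat. All names and the example allocation size below are ours.

```python
# Illustrative arithmetic for the huge-page tradeoff (not code from the thesis).
PAGE_4K = 4 * 1024
PAGE_2M = 2 * 1024 * 1024

def pages_needed(nbytes, page_size):
    # Number of pages required to back nbytes, rounded up.
    return -(-nbytes // page_size)

def physical_footprint(nbytes, page_size):
    # Physical memory consumed if every backing page is fully allocated.
    return pages_needed(nbytes, page_size) * page_size

# An allocation one byte past a 100MB boundary.
alloc = 100 * 1024 * 1024 + 1

# Far fewer TLB entries are needed to cover the allocation with 2MB pages...
tlb_4k = pages_needed(alloc, PAGE_4K)
tlb_2m = pages_needed(alloc, PAGE_2M)

# ...but the footprint is rounded up to the next 2MB, creating bloat.
bloat = physical_footprint(alloc, PAGE_2M) - physical_footprint(alloc, PAGE_4K)
print(tlb_4k, tlb_2m, bloat)
```

Here the 2MB-backed mapping needs about 500x fewer TLB entries, at the cost of nearly 2MB of extra physical memory for this one allocation; the waste compounds across many sparsely used regions.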
In this thesis, we highlight two major shortcomings of current OS-based
solutions in dealing with this tradeoff. First, most existing systems lack
support for dynamic memory de-bloating, leading to scenarios where either
performance is compromised or memory capacity is wasted permanently. Second,
even when existing systems support dynamic memory de-bloating, their strategies
cause unnecessary performance slowdowns and fairness issues when multiple
applications run concurrently.
In this thesis, we address these issues with EMD (Efficient Memory De-bloating).
The key insight behind EMD is that different regions in an application's
address space exhibit different amounts of memory bloat, so the tradeoff
between memory efficiency and performance varies significantly within a given
application. For example, we find that memory bloat is typically concentrated
in certain regions of an application's address space, and de-bloating such
regions has minimal performance impact. Building on this insight, EMD employs
a prioritization scheme for fine-grained, efficient, and fair reclamation of
memory bloat. We show that this improves performance by up to 40% over HawkEye,
the state-of-the-art in OS-based huge page management, and nearly eliminates
the fairness pathologies present in current systems.
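The flavor of such a prioritization scheme can be sketched as follows. This is a hypothetical illustration under our own assumptions, not EMD's actual algorithm: each address-space region is scored by how much bloat it carries and how cold it is, and the coldest, most bloated regions are de-bloated first so that reclaiming memory costs the least performance. The `Region` fields and the scoring function are invented for this sketch.

```python
# Hypothetical bloat-aware reclamation order (illustrative only; not EMD's
# actual policy). Cold regions holding lots of bloat are de-bloated first.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    bloat_bytes: int   # physical memory held beyond what the region needs
    hotness: float     # relative access frequency: 0.0 (cold) .. 1.0 (hot)

def debloat_order(regions):
    # Score = bloat recovered per unit of expected performance cost:
    # high-bloat, low-hotness regions rank first.
    return sorted(regions,
                  key=lambda r: r.bloat_bytes * (1.0 - r.hotness),
                  reverse=True)

regions = [
    Region("heap-A", bloat_bytes=64 << 20, hotness=0.9),  # hot, bloated
    Region("heap-B", bloat_bytes=48 << 20, hotness=0.1),  # cold, bloated
    Region("stack",  bloat_bytes=2 << 20,  hotness=0.5),  # little bloat
]
print([r.name for r in debloat_order(regions)])  # → ['heap-B', 'heap-A', 'stack']
```

The design point this illustrates is the one the abstract makes: because bloat is concentrated in a few regions, a fine-grained, per-region ordering can reclaim most of the bloat while touching few of the pages an application actually uses.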