So, we want efficient automatic memory handling for applications on our platforms, and there are quite a few solutions in this area. These are quite new features, so I will describe them one by one. There is per-process and per-cgroup reclaim functionality, transparent memory compression, custom out-of-memory handling, memory volatile ranges, and the kernel samepage merging feature. Some of these solutions are kernel-only features, like transparent memory compression. Some are kernel features which expose APIs to user space, so user-space applications can influence memory management, like memory volatile ranges or kernel samepage merging. And some are user-space-only solutions, like user-space OOM handling.

OK, let's go through these solutions. So, control groups and the memory controller. Who of you has heard about control groups? Please raise your hand. OK, so I will describe it quickly. I think that yesterday there was a session about control groups in server environments; it's not only useful in server environments, it's also useful in mobile systems. So, the control groups feature provides us a mechanism for aggregating and partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behavior. What does it mean, actually? We have a control group, which is basically a set of tasks.
We have a subsystem, which is usually a resource controller: a module that makes it possible to perform operations on these control groups. And we have the control groups hierarchy, which is a set of control groups arranged in a tree, in such a way that every task in the system is assigned to exactly one particular cgroup. Control groups can be managed using the control groups virtual file system, and it's quite easy to do.

So, we have different controllers for control groups, and one of these controllers is the cgroup memory controller, called memcg, and it isolates the memory behavior of a group of tasks from the rest of the system. Let's see what features it has. It can account and limit usage of anonymous pages and file caches. Let me explain quickly what anonymous pages are, because maybe not all people here are so advanced in memory management: anonymous pages are pages which are not backed by files, so they are also called the private pages of a process. The cgroup memory controller does accounting and limiting of both types of pages. We can optionally have accounting and limiting of swap and kernel memory pages; currently, kernel memory usage accounting includes stack pages, slab pages, and TCP socket memory. It's still a work in progress, so maybe some more kernel memory usage will be accounted.

Another feature of the cgroup memory controller is that it does hierarchical accounting and reclaim, so it operates on the cgroup hierarchy described before. And it can move charges at task migration from one cgroup to another. So, if we have some set of cgroups with limits set, we can move a task from one cgroup to another, and our memory usage counter will decrease in one cgroup and increase in the other one. So, we can have multiple cgroups and multiple limits. We also have soft limits, because the normal memory usage limits are called hard limits, and when a set of tasks exhausts its hard limit:
New processes will be killed because there is not enough memory. But we have something like soft limits, which are usually set below the hard limits, and in case of memory pressure, or running low on memory, the system will try to reclaim as much as possible from such a cgroup to bring its memory usage closer to the soft limit. What reclaim does is just trying to free some caches or trying to put pages to swap.

Another important feature of memcg is the usage threshold notifier: it's possible to register multiple memory usage thresholds and get notifications in user space when they are crossed. We also have this new feature, the memory pressure notifier, which is similar, but instead of getting notifications on crossing usage limits, we get notifications as memory pressure is increasing; I will describe it more on the next slide. And another important feature of memcg is the possibility to disable the in-kernel out-of-memory handler, use user-space out-of-memory notifications, and do OOM handling in user space.

So, let's go further. We have this memory pressure cgroup notifier feature. It was introduced in kernel 3.10, so it's quite new, and it allows applications to receive notifications when a specific cgroup is running low on memory; this is possible even if the whole system is not under memory pressure. We have three levels of memory pressure defined: low, medium and critical. Low means that memory reclaim is happening at a low level, and the application practically doesn't need to do anything. The medium level means that some swapping is happening in the system, and the application is recommended to free unimportant data. And the critical level means that memory pressure is quite high, and the application should really free whatever it can. It's a very nice feature; later I will tell you how we are using it. And the other interesting feature is the shrinker interface to control user space behavior.
It was part of this memory pressure patchset, but it wasn't merged; it's still a work in progress, and it would be nice to have it. So, I will describe how it works. It operates on the concept of chunks of an application-defined size, for example one megabyte. The application passes such a chunk size to the kernel and then registers for notifications. Occasionally the application writes to the shrinker file the number of chunks that it has allocated in user space; it can also write a negative number, which means that chunks were freed. The kernel keeps a counter of the application's used memory, and when there is memory pressure in the system, the kernel can notify the application to free some number of chunks. The application then should free the requested number, or as many as it can, and write a negative number back to the kernel, informing it that this memory has been freed. So, in this way it's possible to tell the kernel about application caches that the application can free when there is a low-memory situation in the system. Unfortunately there aren't any real applications which were converted to this interface yet, so I can't give you any more specific examples; it's still a work in progress, but it's quite promising.

Another feature, which was proposed quite recently, because in March 2013, is per-process reclaim. It allows user space to reclaim memory from any target process at any time, and it is important because we can avoid killing a process if we reclaim its memory first. The developer of this feature was inspired to write it when he was playing a game on his mobile and suddenly got a phone call; his game application was killed, he lost his best score, and he was very unhappy with this fact, so he decided to write this feature.
And it is quite a simple feature. It just adds a new entry in the proc file system: under each process ID directory you can find this file, and writing the corresponding strings to this file causes reclaim of file-backed pages, reclaim of anonymous pages, or reclaim of both types. There is also per-address-space reclaim possible, which can be useful for applications like WebKit: you can specify an address and a size in bytes, and the kernel will try to reclaim memory at this address and of this length. There is a good chance that this feature will be merged in some future kernel; currently the patches are at version 5.

We have a similar feature, per-cgroup reclaim, so we can do reclaim per control group. It adds a memory.force_reclaim attribute, and user space can write to this file the number of pages that it would like to free from the cgroup; the kernel tries to free that number of pages, and it can reclaim more, or fail. Unfortunately this feature is currently NAK-ed. It is possible to obtain similar functionality by other means, but they both have some drawbacks: in case of using soft limits instead of this feature, we must wait for a global reclaim event, and in case of moving tasks between cgroups, we must know the exact number of pages that we would like to free. So currently this feature is NAK-ed, but maybe something similar will show up in the future, because it is also quite useful; this feature was quite useful for us in our Tizen memory handler, but I will talk about it later.

So, memory compression. I will just go over this briefly, because yesterday Seth Jennings had a presentation about transparent memory compression and he already covered the subject. We can have compression of swapped anonymous pages, and it is done in two ways: one is using the frontswap kernel hooks, and the other one is using the block device layer. If we are using the frontswap functionality, we must have a physical swap device, which is a real problem on most mobile devices, so there are some patches
which change this behavior, but they are not merged yet. Frontswap also has some advantages over using the block layer: we can reject poorly compressible pages and send them to the real swap. So, using the frontswap interface, we can check if the page is compressible to some good ratio; if it is not, there is no sense in compressing it, and we should just send it to the real swap and not bother with consuming CPU time on compression. We also have the zram feature, using the block device layer; it requires some user space configuration, but it can act like a normal swap disk.

And we also have compression of clean page cache pages. It uses the cleancache kernel hooks and acts as a caching layer for clean page cache pages. So, when we have pages in the page cache and they are no longer used, instead of evicting them from memory forever, we try to compress them, in case they are needed in the future.

We have two different memory allocators used for compressed pages: zbud and zsmalloc. zbud allows no more than two compressed pages in one physical page frame. This has the bad side that the maximum compression rate is 50%, because we can only fit two compressed pages in a physical page frame, but fragmentation of pages is limited and eviction of pages is easy. The zsmalloc allocator offers us high density, but it suffers from fragmentation, and page eviction is more difficult. The situation is that zbud is now merged in the 3.11 kernel, and it is used by zswap, which is also in the 3.11 kernel, and zsmalloc is still in staging.

We have done some real-life tests of swap compression using Tizen applications. We have done it on an ARM Exynos platform, using the previously mentioned per-process reclaim, just requesting the reclaim of anonymous pages so they were sent to swap. The compression rate was about 50% for the solutions tested, and it took...
OK, this is a very good question. We've done some measurements on it; I don't have these measurements with me, but I can get back to you on this. We also tried the LZ4 compression algorithm, which is new in kernel 3.11, and it is about 10% faster than the default LZO algorithm, while the compression rate is almost unchanged. So if you are doing memory compression, it's worth using this LZ4 algorithm instead of the default LZO.

OK, and what's the status of the memory compression solutions? LZ4 is merged; zcache, in a third version which contains only cleancache support, was proposed for mainline in August; and zram is still in staging, but there are some efforts to mainline it.

OK, and another approach to efficient memory handling is the user-space low-memory killer daemon, which was written some time ago. It was announced over a year ago, and unfortunately it hasn't seen much development, and different companies are trying to do their own out-of-memory handling. What is quite good is that it exists entirely in user space; it has no kernel parts, it just uses some kernel functionality to get low-memory notifications, which can come from cgroups or from the vmevent functionality, which was an ancestor of the memory pressure cgroup notifications. This user-space daemon is unfortunately not converted to use the final version of the memory pressure cgroup notifier, but you can use cgroups to pass notifications to it, and it acts as a drop-in replacement for the kernel Android low-memory killer driver. So it is quite a nice piece of software.

When it comes to vendor solutions, we also have one, and we are trying to go from a mixed kernel and user space solution to a user-space-only solution. What's interesting in our solution is that processes are divided into two groups: we have foreground and background processes. Based on the actual user interface, you can make a distinction between the processes the user sees and the ones the user doesn't see, and when there is a memory pressure
situation, we just kill the processes which are in the background, and the user doesn't notice this; later, when he wants to switch to an application that was in the background, it is just launched from scratch. This solution can also use the compressed swap feature: when there is a low-memory situation, instead of killing processes we try to compress them first, and only when there is really no memory left are they killed. This is all work in progress for future Tizen versions, and I hope that it will evolve into a user-space-only solution, and that the kernel parts will be either upstreamed or replaced by some upstream solutions.

And now a bit about kernel features that provide applications with an API that can allow them to use memory more effectively. One of such features is volatile memory ranges. It was inspired by the ashmem device implementation from Android, and it provides a way for a user space application to give hints to the kernel about memory that is not immediately in use and which can be regenerated; a real-life example of such applications are browsers and their caches, and we are trying to use this feature for WebKit. The application informs the kernel that a range of pages in its address space can be discarded anytime the kernel needs memory. It is done by marking pages as volatile, and when the application later needs these pages again, it just unmarks these pages, or in other words it marks them as non-volatile. If the kernel has already freed this memory when the application requests it to be made non-volatile, the kernel will return a value to user space informing it that the data was lost and must be regenerated. If the application accesses such memory that is gone, it will get a SIGBUS signal, and the SIGBUS signal handler can mark the memory as non-volatile and regenerate the content; but instead of doing this, the application can just check the returned value, not access this memory, and not get a signal. So there are two ways to handle it. And this
feature adds a new syscall; it's called vrange, and it can be used for both anonymous and file pages. The function has four arguments: address is the starting address of the memory area, length is the length of the range to be marked, mode can be either VRANGE_VOLATILE or VRANGE_NONVOLATILE, and purged is a pointer to an integer that will be set if any data in the range being marked non-volatile has been reclaimed and is lost. So, the feature was first proposed quite a long time ago, because in November 2011, as an extension to the POSIX fadvise interface; it was also proposed as an extension to the fallocate system call and to the madvise system call, but eventually the memory management developers decided that it would be best to make a new syscall. Currently this set of patches is at revision 8, and maybe it will be upstreamed in the future; there have been positive reactions to it.

A similar interface is kernel samepage merging. It allows dynamically sharing identical memory pages between one or more processes. Memory is periodically scanned by a kernel daemon, which identifies and merges identical pages. The main users of this feature currently are systems doing a lot of virtualization; for example, it's supported by KVM. These virtualized guests have a lot of the same memory, but this memory is not shared through parent-child relationships between processes. Normally, when you have a process and you do a fork from it, the memory of the child and parent processes is shared; but in the situation when you have virtualized guests, there are parts of OS images, for example, and some parts are identical, but you can't know it at the kernel level; you must inform the kernel explicitly that there is a possibility that in these memory areas the data is the same. And since memory scanning uses a lot of processing power, it's quite important to limit it to areas that are likely to benefit from sharing; so we are not doing this scanning of
memory for all available memory; instead we adjust applications to point the kernel to the memory areas that are likely to be identical. For this we have the MADV_MERGEABLE parameter for the madvise system call, to mark pages as likely candidates for merging, and there is the MADV_UNMERGEABLE parameter to unmark them and unshare them. It's important to note that unsharing pages may require more memory than is currently available: when pages were shared, the duplicate copies were removed from the system, so there is less free memory than there was before. So when you do this unmerging, it's possible to run out of memory, wake up the out-of-memory killer, and receive a SIGKILL signal from the kernel; a bit of care is needed in using these system calls.

And there are some references to documentation: the cgroups documentation is quite good, there are good examples on how to set up cgroups and manage them, and the same for the memory controller and KSM. Maybe it's time for some questions.

OK. No, not yet, we've just started to try to use it for WebKit, and it's still work in progress.

OK, do you mean using some hardware-accelerated compression? It could be beneficial, but there is no software support; do you have some specific hardware in mind? And it could be beneficial not only because it could be faster, but because it could draw less power. It's possible, that's a good idea. Any more questions?
For example, you don't always want to do a reclaim. There are situations, for example, where you have a video player running on a mobile device, and you would like it to keep a larger amount of file caches, so you wouldn't like to do reclaim on this application. So it's not only about doing reclaim, but also about the order of doing it. So I don't know if it's unfortunate, but there are situations when it's useful.

You mean that applications can get some notifications? Yes, but that requires some adjustment to the application; it requires you to change the application to receive these notifications, while you can do per-process reclaim without any changes to existing software.

OK, so thank you.