Hi. I'm going to talk about flash memory. A lot of you have used flash, but I believe that anybody who works with computers should have a proper understanding of how it works: this is a transistor, this is how we make logic gates from it, and so on, building up into bigger and bigger black boxes. So I'm going to do that with flash, to a certain extent. I'm going to talk about what flash is, how it works, how we put it together, and then why it's special: what we have to do to cope with the characteristics of flash when we're trying to reliably store data on it. I'm going to talk about the methods we use to store data on flash: the translation layers, which are the common method you'll see used in consumer devices to make flash look as if it were a hard disk, and other approaches like file systems directly on the flash medium. And I'm going to talk a little bit about the software we have in Linux, the drivers for supporting flash directly. So, this is a transistor. Well, kind of: a basic field-effect transistor. You put a voltage on the control gate here, and, without going into the quantum mechanics, it basically screws with the electric field and allows electrons to pass between the other two terminals. So you put a voltage on here, and it allows current to pass. What's different in a flash cell is that we have this floating gate here, an island with insulation around it, and you pump electrons into that. When there are electrons in it, they screw with the field some more, so you have to put a higher voltage on the control gate in order to allow current to pass. So when the cell is programmed, it has electrons in the floating gate, and you can tell whether it's programmed or not by looking at the voltage you have to put on the control gate before current will pass between the source and the drain. That is basically a single flash cell: a single bit.
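That read mechanism can be sketched as a toy model. The threshold voltages here are purely illustrative assumptions, not values from any real part; the point is just that the trapped charge raises the voltage at which the cell starts to conduct.

```python
# A toy model of reading a single flash cell. Assumed, illustrative
# threshold voltages: an erased cell conducts once the control gate
# passes ~1 V, while electrons trapped in the floating gate push a
# programmed cell's threshold up to ~4 V.
ERASED_THRESHOLD_V = 1.0      # assumed value, for illustration only
PROGRAMMED_THRESHOLD_V = 4.0  # assumed value, for illustration only
READ_VOLTAGE_V = 2.5          # a read voltage between the two thresholds

def cell_conducts(threshold_v, gate_v):
    """Current flows from source to drain once the control-gate
    voltage exceeds the cell's threshold voltage."""
    return gate_v > threshold_v

def read_bit(threshold_v):
    """An erased cell (conducting at the read voltage) reads as 1;
    a programmed cell (not conducting) reads as 0."""
    return 1 if cell_conducts(threshold_v, READ_VOLTAGE_V) else 0
```

Reading is then just applying the intermediate voltage and seeing whether current flows.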
Originally we would wire this up in an arrangement similar to this: NOR flash. Any individual cell can pull the bit line that you read out, or program, down to ground, and you see the difference; this is kind of the physical representation of how that would be used. NOR flash chips work very much like ROM electrically: you can read them directly, just address some data lines, and you can program individual bits, basically until all the bits are cleared. The way we arrange flash cells in a chip is such that you can individually program bits, and that's done (here we get into the quantum mechanics again) by putting a high voltage on the gate and a high current between the other two terminals, which causes electrons to tunnel into the floating gate. The way you clear it is normally done only in large blocks, 64 or 128 kilobyte blocks: you put a high voltage on one of the other terminals. Don't ask me too much about the quantum mechanics; I'm going to move on from that very quickly. But basically, you can program individual bits, but you can only clear bits, that is, take them back from a logical zero to a one, in large chunks, typically 128 kilobytes of data. It's a bit like an Etch A Sketch in that sense. Everybody familiar with the Etch A Sketch? You can draw on it until it's all black, and then the only thing you can do is wipe the entire thing. That's how a flash erase block works. The NOR flash arrangement was very useful, and we use it for BIOS and things you have to boot from, but it's not very space-efficient.
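The Etch A Sketch semantics can be sketched in a few lines. This is a minimal model of the asymmetry, not any real chip interface: bits can be pulled from 1 to 0 individually, but the only way back to 1 is wiping the whole erase block.

```python
# A sketch of flash program/erase semantics: individual bits can be
# programmed from 1 down to 0 at any time, but the only way back to 1
# is to erase the entire erase block. The block size is the "typical"
# 128 KiB mentioned in the talk.
ERASE_BLOCK_BITS = 128 * 1024 * 8

class EraseBlock:
    def __init__(self):
        self.bits = [1] * ERASE_BLOCK_BITS  # erased flash reads all-ones

    def program_bit(self, index):
        # Programming can only pull a bit down from 1 to 0; there is
        # no way to set a single bit back to 1.
        self.bits[index] = 0

    def erase(self):
        # The whole block is wiped back to 1s at once, Etch A Sketch
        # style: no "un-program one bit" operation exists.
        self.bits = [1] * ERASE_BLOCK_BITS
```

Everything a translation layer or flash file system does follows from this one constraint.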
And so later we came up with the NAND flash arrangement, where, in order to read the contents of any particular cell you're interested in, you put a sufficiently high voltage on all the other cells in the string so that they will pass current regardless, and then you put the intermediate voltage on the cell you care about, so it may or may not conduct, depending on whether it's programmed. That allows you to get a much denser arrangement of flash cells in the chip. We call that a string of NAND flash, and we arrange the strings into erase blocks and divide those into individual pages. A typical page will have 2K of data, and in order to allow us to do error correction we add a bit more on the end, so typically 2112 bytes in a page, almost 17,000 bits. And we read a whole page at a time, by selecting all those cells, into a RAM buffer inside the chip. So I'm going to talk a little bit about the ways of losing data from flash. One of the problems is that the electrons that are programmed into the floating gate just bugger off; electrons are like that. So over time you will simply lose data; you will see bit flips just naturally happening. With repeated program and especially erase cycles, you will also find that the insulating material breaks down and charge builds up in it, and that is what contributes to the limited lifetime of flash chips. You've heard that certain flash chips may have a million erase cycles; these days it's going down by orders of magnitude, so we're talking about 100,000 erase cycles per erase block before it's just useless and you can't use it any more, 10,000 in some of the latest chips. And that's because the charge build-up in the insulating layer causes leakage to be so fast that the block is useless. The other two ways that data get lost on flash chips, which I'm going to talk about in a little more detail with diagrams, are read disturb and write disturb.
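The geometry just described is worth writing down as arithmetic; these are the "typical" numbers from the talk (2048 data bytes plus 64 spare bytes per page, 128 KiB erase blocks), not the only layout NAND chips use.

```python
# NAND page and erase-block geometry, using the typical figures from
# the talk: 2048 data bytes plus 64 spare (out-of-band) bytes per page,
# and a 128 KiB erase block of data.
PAGE_DATA_BYTES = 2048
PAGE_SPARE_BYTES = 64
PAGE_TOTAL_BYTES = PAGE_DATA_BYTES + PAGE_SPARE_BYTES  # 2112 bytes
PAGE_TOTAL_BITS = PAGE_TOTAL_BYTES * 8                 # "almost 17,000 bits"

ERASE_BLOCK_DATA_BYTES = 128 * 1024
PAGES_PER_ERASE_BLOCK = ERASE_BLOCK_DATA_BYTES // PAGE_DATA_BYTES
```

So each 128 KiB erase block holds 64 of these 2112-byte pages, and the 64 spare bytes per page are where the ECC (and, on SLC, metadata) lives.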
So read disturb is basically that the act of reading the flash tends to degrade the contents, and write disturb is that a write not only programs the cells you're actually intending to write to, but also tends to cause bit flips in adjacent cells. Here's an example of programming NAND flash. You put a high voltage, say a 20 volt potential difference (the numbers are mostly made up), across the control gate of this particular cell, the one we're trying to program. This down here is a page; these bits we're leaving at 1, their naturally erased state, as we program this page, and these bits are being set to 0. So these bits will be programmed, but you will see that there is also a smaller potential difference across various other cells, and those cells have some probability of losing data, of seeing bit flips, as we program the cells around them. That's something like a probability of one in 10 to the 10 for SLC, single-level cell, flash chips, and maybe 100 times worse, one in 10 to the 8, for MLC. I'll talk a little bit about SLC and MLC in a moment. Read disturb is a very similar principle: we are reading this page, and we are actually putting a potential across the other cells, and that tends to cause disturbance in those cells. So basically we lose data just by programming and reading the flash. What makes it even more fun is that these days we are trying to put multiple bits of information into a single cell. What I've talked about so far has been binary: either there are electrons pumped into the floating gate or not.
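Before moving on to MLC, the disturb mechanisms above can be put into a crude simulation. The per-operation flip probabilities are just the rough orders of magnitude mentioned in the talk, and the independence assumption is a simplification.

```python
# A crude simulation of program/read disturb: each operation on a page
# gives every bit in the neighbouring cells some small, independent
# chance of flipping. The probabilities are the rough orders of
# magnitude from the talk, not measured figures.
import random

SLC_FLIP_PROBABILITY = 1e-10   # roughly 1 in 10^10 per cell
MLC_FLIP_PROBABILITY = 1e-8    # maybe 100x worse for MLC

def disturb(page_bits, flip_probability, rng=random.random):
    """Return the page contents after one disturb event, flipping
    each bit independently with the given probability. `rng` is
    injectable so the behaviour can be tested deterministically."""
    return [bit ^ 1 if rng() < flip_probability else bit
            for bit in page_bits]
```

Run over billions of cells and thousands of operations, even these tiny probabilities add up, which is why ECC is not optional.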
And that's, you know: if a cell is not programmed and is left at a 1, then the voltage needed to make current pass may be somewhere in this range, and if it has been programmed to a 0, then the voltage will be higher; so you work out whether it's programmed just by looking at the voltage necessary to make the transistor pass current. In a multi-level cell you actually try to derive more than one bit of information from that, according to how much charge you have pumped into the floating gate, which obviously makes it much more error-prone. So in MLC cells you will see many more bit flips, much more data loss, and we tend to use more ECC. I mentioned earlier that, for example, in a 2K-page flash you'll have 64 bytes of out-of-band spare data per page; that's mostly for ECC. In the days of SLC flash chips you could put some ECC there and still use some of the spare area for metadata and other interesting things. With MLC chips, because they haven't actually increased the amount of spare area, you generally have to use all of it for your ECC syndrome and you cannot use any of it for metadata. So that's the internals of the flash chip. The way it's presented to the host is generally a fairly uniform interface. There's a little controller and a RAM buffer inside, so you read and write pages via this RAM buffer by sequencing the data in on the data bus. You can also send commands: using the command latch enable line you can send a command byte, which might be erase, or program, or read a page into the buffer, or sequence data out over the data bus. And obviously some commands have addresses attached. So that's the standard interface for NAND flash chips. Once upon a time we would just hook this up to some GPIO lines; it didn't really matter how fast it went. You could even do it off a PC parallel port.
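The MLC idea can be sketched as a decode table. The voltage windows and the bit encoding per level here are invented for illustration (real parts differ, and use Gray-coded levels); the point is that squeezing four levels into the same voltage range leaves much narrower margins than two.

```python
# Decoding two bits from one MLC cell, assuming four illustrative
# threshold-voltage windows. The voltages and the bit assignment per
# level are made up for this sketch; real chips differ. Narrower
# windows mean less margin per level, hence more bit errors than SLC.
MLC_LEVELS = [            # (upper edge of window, decoded bit pair)
    (1.0, (1, 1)),        # erased: no charge pumped in
    (2.5, (1, 0)),
    (4.0, (0, 1)),
    (float("inf"), (0, 0)),  # fully programmed: most charge
]

def decode_mlc(threshold_v):
    """Map a measured threshold voltage onto one of four charge
    levels, and hence onto two bits of data."""
    for max_v, bits in MLC_LEVELS:
        if threshold_v <= max_v:
            return bits
```

With SLC there is one comparison and a wide margin; here a small drift in charge pushes the cell into the neighbouring window and silently flips a bit.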
But obviously, more and more these days, people care a little bit more about speed; we're not just seeing flash in slow embedded devices. So we have decent controllers which will do this with DMA to host RAM, and even do error correction in hardware: they will at least generate the ECC data automatically on write and check it on read. A lot of them will simply flag an error when they read a whole page, and let you deal with fixing the ECC errors in software. So these are the things we have to think about when coping with flash. I mentioned error correction: we have to cope with the fact that there will be bit flips, and these may happen randomly, even in pages we hardly ever touch, say a bootloader on a machine that only gets booted once a year. You do have to go back and read those static pages periodically at run time, before you need them for a reboot, so that you can notice a bit flip while there are still few enough errors to be corrected, and rewrite the corrected data elsewhere. We have to cope with bad blocks. Some blocks go bad, and some blocks are marked as bad as they leave the factory, and you must never use those blocks; you must never even try to erase them, because the world may end if you do, according to the data sheets. So we have to handle bad block management, and we also have to handle wear leveling (not going through the slide in order, but that's good, because you shouldn't just read the slide, should you?). So we also have to handle wear leveling, because over time the cells break down and start to become unusable. What you don't want to do is wear out particular blocks early: if you're using a FAT file system on top of a translation layer, for example, you do not want to wear out the erase blocks that you tend to use for your FAT long before the rest of the device, and thus render it unusable when it didn't need to be.
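The periodic scrubbing of static pages described above might look something like this sketch. The two callbacks, `read_page_with_ecc` and `relocate_page`, are hypothetical driver hooks invented for the example, not a real MTD API.

```python
# A sketch of a scrubbing pass: re-read every page, including static
# ones, and if the ECC reports a (still correctable) error, rewrite
# the corrected data elsewhere before flips accumulate beyond what the
# ECC can fix. Both callbacks are hypothetical hooks, not a real API:
#   read_page_with_ecc(page) -> (corrected_data, bitflip_count)
#   relocate_page(page, corrected_data) -> writes a fresh copy elsewhere
def scrub(pages, read_page_with_ecc, relocate_page):
    relocated = []
    for page in pages:
        data, bitflips = read_page_with_ecc(page)
        if bitflips > 0:
            # Correctable today, maybe not tomorrow: move it now.
            relocate_page(page, data)
            relocated.append(page)
    return relocated
```

The key point is that scrubbing has to run even over data nobody is reading, because the bootloader you need once a year degrades just the same.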
You want to wear out the flash relatively evenly, so that you can extend its lifetime as much as possible. One of the things that's come into play in recent NAND flash chips, at least in the last five years rather than ten, is that you must program the pages of an erase block sequentially: the first page first, then the second, and so on. This is to reduce the amount of program disturb, because if you were to write pages in a random order you would tend to cause more bit flips in the surrounding pages. So the data sheets now say that you should start at the beginning and work towards the end. Garbage collection comes about because of the Etch A Sketch nature of the erase block. What happens is that you've filled up a block with data, some of which is still valid and useful, but some belongs to deleted files or something else and is no longer relevant. Once the whole of your flash (or, hopefully, slightly before the whole of your flash) has been written, you've got to start thinking about making space for new writes. What you need to do is consolidate the still-valid data from these part-valid, part-obsolete erase blocks into new erase blocks: you copy the bits that you still want, and then erase the victim erase block, so that you've gained space. And the most fun we have is with MLC flash. I mentioned that you have two bits per cell, or sometimes more; we're talking about three bits per cell, I think even four bits per cell now. They don't put those bits in the same page, so that you'd read and write them at the same time; that would be too easy. The reason they don't is basically that you'd have to pump enough electrons into the floating gate to get all the way from a zero to the level-three state in certain cases, if you're programming two zeros into the pair of bits.
And that will either be slow, if you have to pump three handfuls of electrons in, or you have to pump harder, in which case you cause more program disturb. So what they do is put the separate bits of information in the multi-level cell into separate logical pages, so that when you're programming the first you only have to pump two handfuls of electrons in, or none, as the case may be; and then when you're programming the second you pump in one handful or zero, for the least significant bit. Which leads to great fun, because when you program your first page you might think: yes, my data are committed to the medium, everything is happy, I can return OK from my fsync() call, and so on. But when you later come back and program the other page of that pair, if you get a power failure or something goes wrong while you're programming that page, you can lose the information that was in the first logical page. And just to make it even more fun, because we love working around hardware in our software, they don't even make them adjacent pages: often it's page zero and page three, or something else, and they don't always tell you which pages they are, either. So yes, MLC flash: not my favorite thing. So those are the problems we have to deal with. There are two major ways we present flash so that we can actually store files on it. The first is that you try to make it pretend to be spinning rust. This was really useful in the days of DOS, where we had to provide an INT 13h disk BIOS handler and that was the only way to do it; we couldn't really sensibly do installable file systems, although we could eventually. And it also makes a certain amount of sense because you can then just put it in a black box and stick it on an IDE bus or whatever bus; there are some advantages to this approach. So I'll talk a little bit about how a flash translation layer works.
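A minimal sketch of the translation-layer idea, before the detailed walkthrough. All the names here are assumptions for illustration: logical 512-byte sectors are written out-of-place to flash pages, and a map remembers where the latest copy of each sector lives. A real translation layer also persists that map (for instance in the spare area) and garbage-collects the stale copies.

```python
# A minimal, log-structured translation-layer sketch: sectors are
# never overwritten in place (flash forbids it); every write goes to
# a fresh page and the map is updated, leaving the old copy as
# garbage to be collected later. Names are illustrative only.
class FtlSketch:
    def __init__(self, num_pages):
        self.flash = [None] * num_pages  # page index -> sector data
        self.map = {}                    # logical sector -> page index
        self.next_free = 0               # append-only write pointer

    def write_sector(self, sector, data):
        # Out-of-place update: claim the next free page, write there,
        # and repoint the map; the previous copy becomes stale.
        page = self.next_free
        self.next_free += 1
        self.flash[page] = data
        self.map[sector] = page

    def read_sector(self, sector):
        # Reads always go through the map to the latest copy.
        return self.flash[self.map[sector]]
```

Everything that follows (erase block chains, folding, trim) is about managing the stale copies this scheme leaves behind.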
A translation layer is basically a file system: a file system that presents one file, with random access to individual 512-byte sectors within that file. This is an old example of a flash translation layer. It does violate the rule I mentioned earlier, that these days you mustn't write pages out of order, but it's an example of how it might work. It would also be on SLC flash, so it would be able to use metadata in the spare area of the flash: it could mark each sector with the sector number that it actually represented. The logical disk was divided up into erase-block-sized chunks, virtual erase blocks, and each virtual erase block of the disk was represented by a number of physical erase blocks. I'm just going to consider one chunk, say the first 128K of the disk. So imagine the first two sectors, the boot block, have been written, and that's it; very, very simple. The data have been written, and we've got a little metadata in the spare area saying: this is sector one, this is sector two. Then they are overwritten, and sectors one, two, three and four are all written. What we do is make a chain of erase blocks, so both of these physical erase blocks represent the same virtual erase block. The contents of sectors one and two can't be overwritten in the original block, so they have to be written into a new block in the chain, whereas sectors three and four can be written into the original physical erase block. Then we write some more sectors, two, four and six: we'll have to introduce a new block into the chain for sector two, whereas the others can be written into the existing physical erase blocks. Eventually you run out of erase blocks and have to start consolidating, so what we do is copy all the data from the old erase blocks into the final one in the chain, and then we can start erasing the old blocks in the chain. So that's, you know, a very simple way that
a flash translation layer might work, and might use the flash in order to present a disk interface. One of the big inefficiencies in this model is the fact that some of those sectors might not be used. It's pretending to be a disk: you could have filled up the file system on that disk, your FAT file system, ext2, whatever, and then deleted all the files, but the disk has absolutely no idea that you've done so. So you've probably heard about the trim command, which has been added recently for discarding sectors: we can actually tell the block device, I don't care about the contents of this particular sector any more, you can just forget about it. If you go back to the previous slide, before the erase block chain was consolidated, we could say: trim, I don't care about sectors one and three any more, just forget about them. So when the chain is folded they're just gone; they don't get copied into the last block, and if they are subsequently written they can go straight into that block. You've saved a whole erase of a physical erase block, and it means that a lot less data have to be copied around when you're garbage collecting. But that's just one example of a flash translation layer, and quite an old one. Another one might be more log-structured, so the first write just has those two sectors, and then the next four sectors are written physically contiguously. Obviously there's more work to be done here to keep track of where each sector can be found on the flash when you start up, but that's handled by the file system that is either inside the black box, which is an SSD, a CompactFlash card, an SD card, or done in software; we have about six implementations of these translation layers in software in the kernel at the moment. So basically what you have here is a file system on top of another file system, and there are a number of problems
with this. I mean, yes, it's nice and simple, and it makes it easy to provide a black-box interface, but it's not massively efficient, and it's very hard to optimize in both directions. The upper-level file system, the Btrfs or ext4 or whatever, finds it very hard to optimize for how the underlying file system, the translation layer, will work. We've seen certain characteristics of old-style SSDs that we've tried to work around and cope with: we try to align our writes to the erase blocks, and tweak our partitioning so that writes are aligned to the underlying erase blocks. We've tried to do this in file systems, and then found that the next generation of SSDs just doesn't need it any more, or in fact that it's counterproductive, because we're trying to optimize for this opaque layer underneath. And likewise, the file system underneath, the translation layer, cannot sensibly optimize for the file system that's going to be used on top of it; it doesn't know what kind of file system that will be. We have seen some fun things with this, though. We've seen CompactFlash cards which assume that they're going to have FAT used on them. This predates trim, but they thought: this trim thing would be really cool, it would be nice if we could know when certain sectors weren't used; and hey, look, when they write to this block, that's going to be the FAT, so we can see when they've cleared this cluster and marked it unused, and then we can discard the contents of those sectors over there. How cool is that? That's lovely if you're running FAT on it; not so lovely if you're running anything else. So yes, there are problems with the layering approach of one file system on top of another. Another is garbage collection. When we have to do the block folding, or other kinds of garbage collection, or just, well, if we see a bit flip and decide that the contents of one physical erase block need to be written out elsewhere, that is an ideal time for the
upper-layer file system to defragment or otherwise optimize its data. If it's an extent-based file system, it could reorganize that data as it's being copied; but if you have the layering there and you don't let the upper file system see that, then you miss the opportunity to optimize the data storage as you're moving it about. Anyway, one of the other things we really care about with garbage collection is separating long-term data, data that's just going to sit there forever, like your kernel, for most normal people who don't change their kernel every day. What you don't want is for that to live in the same block as the atime of libc. What you want is to keep long-term data together, in blocks that don't get changed very often, and short-term data together, so that when you pick a block for garbage collection it's mostly stuff that doesn't matter anyway and you don't have much copying to do. And again, the real file system can have some clue about how long individual blocks of data are going to last, but the layering doesn't really offer you that. Likewise transactions: the lower-level file system, the translation layer, has to provide guarantees about what will happen in the face of power failure and so on, just as the upper-layer file system has to provide the same or very similar atomicity guarantees, and it can be quite inefficient to provide both of those. Although, actually, it's not so inefficient, because mostly the flash translation layers don't get that right anyway. If you do any power-fail testing on most SSD-type devices, you'll just find that they crap themselves and can't actually mount their own internal file system, and so the whole thing goes south; you've basically got a little brick. And that's a problem, because if it's a black box, a separate device like a CompactFlash card, you don't have any access to the underlying medium to do
any kind of file system check on it, or recovery, so you're just kind of screwed, really. So, we reckon it takes maybe five years for a file system to come to maturity. Btrfs: how many years before people will really start trusting Btrfs for their mail spool? Who's already doing so? You're mad. Who will think about doing so in a year's time? Two years? Three years? Yeah. So we reckon it takes a number of years for a file system to really come to maturity, and there we're talking about an open source file system, where we can poke at it, diagnose it, debug it, and look at what it's doing on the underlying medium. Now put it in a black box, have it written by the same crack-smoking hobos they drag in off the street to write PC BIOSes: who wants to trust their mail spool to that? Right. So there is no fundamental reason why this should be broken. There are some efficiency concerns about the pretend-to-be-spinning-rust approach, the biggest of which is trim; once trim is actually implemented sensibly, that will deal with the biggest efficiency concern. In fact trim is mostly implemented quite badly at the moment: it's supposed to make things go faster, but it's so slow in current implementations that I think Btrfs has it disabled by default, even on devices which claim to support it. So the efficiency concerns are there; the reliability, though, is not a fundamental issue. It should not be like that; it's just that in practice it always has been a problem. So the other approach is to have a file system directly on the flash. Now, I've actually spoken in Brisbane at LCA about JFFS2, the journalling flash file system, a number of years ago, so I'm not going to go into too much detail about that, but I'll talk a little bit about the support we have in Linux for dealing with flash directly. The support we have at the moment is showing its age a little. We have synchronous read and write APIs, because in the old days it was all bit-banging and nobody really
cared about DMA and queueing. We need to fix that; we're working on a new API that looks a bit more like the block API, in that you just queue up a whole bunch of things you want done and it tells you when they're done. It did have an asynchronous erase, because that could take seconds back then. It handles the error correction, with various methods for dealing with different types of software and hardware ECC, and it handles bad block management. One of the things you'll find in NAND flash is that one of the bytes, or some of the bytes, in the spare area will be zeroed when you get a virgin flash from the factory, and that's how bad blocks are marked. But we don't want to keep that as the way we remember bad blocks for all the time we're using the flash, partly because that involves going and looking at the spare area of every block, and partly because it means you can't use that byte, byte three or whichever it is, for data: if it ever actually got legitimately written with a zero, then it would suddenly look like a bad block the next time you scanned. So what we do is set aside a few erase blocks, with some redundancy, and when we take a virgin flash we make a bad block table on the flash, remembering which blocks are bad; and we can mark blocks bad later, as they start to misbehave, by updating that table on the flash. So the MTD support in Linux provides all of this functionality for users of MTD devices to take advantage of. Also, a more recent development from IBM and Nokia is UBI, for Unsorted Block Images. It's kind of like LVM on steroids for the flash interface. It offers two main interfaces. It gives you static volumes, which you might use for a bootloader finding your kernel; they are atomically overwritable, in much the same way as files on a POSIX file system, in that you write a new volume and then rename it over the top of the original one. So you write a new kernel, and then rename
it to "kernel", and then you've got a safe method of updating the kernel volume. It does wear leveling on that, and it will do the scrubbing that's required to test for bit flips in the data, which I mentioned earlier that we needed. And then, perhaps more interestingly, we have the dynamic volumes, which do a certain amount of the underlying translation that flash translation layers do, but without forcing you into the whole atomically-overwritable-512-byte-sector thing. It basically gives you a simple logical-to-physical mapping: you deal with logical erase blocks, and UBI, under the covers, maps each one onto a physical erase block. When you say "I want to erase this logical erase block", it only needs to unmap it; it doesn't have to erase it immediately (erases can be quite slow), it just has to say, OK, there is no physical erase block associated with that logical block any more. And we have UBIFS, a file system written at Nokia, shipped on the N900, which makes use of this and provides a proper file system on top. I'm also looking, and I know I've been saying this for a while, but I will get some time to spend on this this year, honest, at Btrfs directly on the UBI interface. Btrfs already has a fairly good abstraction for its underlying storage, which allows it to do RAID and other things internally, and it should be quite useful to do Btrfs on UBI.
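The dynamic-volume idea can be sketched as follows; the class and attribute names are assumptions for illustration, not UBI's real structures. The point is that "erasing" a logical erase block is just an unmap, with the slow physical erase deferred.

```python
# A sketch of UBI's dynamic-volume mapping: logical erase blocks (LEBs)
# map onto physical erase blocks (PEBs), and erasing an LEB is just an
# unmap; the slow physical erase of the PEB is queued for later, in
# the background. Names and structures are illustrative only.
class UbiVolumeSketch:
    def __init__(self, num_pebs):
        self.free_pebs = list(range(num_pebs))
        self.leb_to_peb = {}       # logical -> physical mapping
        self.pending_erase = []    # PEBs queued for background erase

    def map_leb(self, leb):
        # Associate the LEB with a physical block on first use.
        if leb not in self.leb_to_peb:
            self.leb_to_peb[leb] = self.free_pebs.pop(0)
        return self.leb_to_peb[leb]

    def unmap_leb(self, leb):
        # Cheap "erase": forget the association immediately and let
        # the physical erase happen whenever it is convenient.
        peb = self.leb_to_peb.pop(leb)
        self.pending_erase.append(peb)
```

A real implementation would also pick PEBs by erase count for wear leveling; that bookkeeping is omitted here.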
Btrfs also already has the copy-on-write mechanism; it doesn't overwrite data in place. So, I was going to talk briefly about my ideal hardware wish list, but somebody is holding up a "cut" sign at me, so I'm going to ignore him and carry on anyway. I want fast queued DMA transfers: not bit-banging, not even a synchronous function call. I want full error correction in the hardware; I don't want to have to do the Reed-Solomon encoding myself to cope with errors, because they happen often enough that it's not some slow path that should never happen. I want scrubbing: I want to be able to tell the hardware, just read this page, don't give me the data, just tell me if the ECC works, and I'll cope if something goes wrong; tell me if a bit flipped. And I want page copy. A lot of flash chips have an internal copy command, so you can say: read the contents of this page, write it out there. This is completely pointless if you're not actually going to check and correct the ECC in the process, so we can't really use it if we're being sane. So what I'd really like to see is some updated firmware for some of the SSD devices which gives us this kind of interface, something very similar to the UBI interface, that we can put a proper file system on efficiently. And now I think I have come to the end, so do we have questions? Question: I've heard that the wear leveling on some of these devices is pretty stupid. Any comments? Answer: yes. Often you'll find that the wear leveling only happens within a certain chunk: you've got a few gigabytes of CompactFlash device, for example, and yet the wear leveling only happens within a certain 64K chunk, well, probably a bit more than that, say a megabyte chunk. So if you put FAT on it, for example, you will get wear leveling across the first megabyte of your CompactFlash, but only
across the first megabyte, so you will still wear that megabyte out much more quickly than the rest of the flash; it's very ineffective wear leveling. But again, you can't tell. We've seen some very silly things when we've actually taken these things apart and hooked up logic analyzers to see what they do on the flash. One particularly impressive one, when it was doing garbage collection, would read from the victim block into RAM, then pick another victim block that needed garbage collecting, copy the valid data from there into this first victim block that it had just erased, then erase the second victim block, and only then write back the valid data from RAM into the second victim. So it moved data around; it wouldn't end up just moving data from one block to another and wearing those blocks out. It's kind of a valid technique for wear leveling, but perhaps you'd want to do it without quite such a huge race window for losing power and just losing data. Question: trim marks a block as not needed any more. Is there an un-trim? What happens after you try to read a trimmed block; does that mark it as dirty again? Answer: why would you try to read a trimmed block? The standards committees have said that the contents are undefined, I think, but they do want it to always read the same data afterwards. So if you trim a block and read it once, it doesn't matter what you get back; you're not guaranteed not to get the original data, because trim can do nothing at all. It's an optimization; it's not for secure delete. But if you trim and then read some data, you might get all zeros, you might get the original file, it doesn't matter: whatever you get the first time you read after a trim, you shall get the second time. It shouldn't just change randomly; that's the only guarantee you really get. Question: I'm not sure if
you've come across it: there was recently a quite popular Android phone that shipped with a file system that caused lag issues, just because they picked one type of file system for the internal flash storage rather than another. Are there good ones to use, and ones you should never use?

A: Never use JFFS2. It is obsolete; use something else instead. I would mostly say UBIFS these days. Android often uses YAFFS2; there are issues with that, and it's not upstream. If you want to spend your engineering time dealing with code that isn't upstream, then that's fine, but it's certainly not something I would normally recommend. So UBIFS is what I would normally suggest, and once I've pulled my finger out, btrfs on UBI.

Q: Intel makes SSDs. Are Intel SSDs saner than other brands? Would you yourself use an SSD, and if so, which brands would you be more likely to use over others?

A: I use an Intel SSD. It's very fast and it works very nicely; I keep git trees on it, and it builds fast. I don't keep my mail spool on it. Some of them are better than others; there is no fundamental reason why they have to be unreliable. The Intel ones, even if they weren't paying me, I would probably concede are amongst the best; they do a fairly good job of making this stuff work right.

One problem, though, is that you can't necessarily rely on that. I was in Boston a few weeks ago talking to OLPC engineers, and they're using SD these days. They had actually managed to find some SD cards which passed some sort of reliability and power-fail testing, and said "absolutely brilliant, let's have some more of these". The next batch were not so good, so they took them apart: completely different hardware, same manufacturer, same model, everything, just a different batch. It's a black box, and from the point of view of an end user you can't really know whether to trust that black box.

Q: Can you recommend any utilities that can be used, say, to do write tests on a machine, send the results over a network, and then randomly pull the power out?

A: Off the top of my head, no, but if you send me email I can hook you up with that kind of thing. We have certainly done that kind of power-fail testing on JFFS2 and UBIFS, and on various sorts of CompactFlash and other types of devices, and those scripts exist.

Q: Do any of the flash devices do soft-decision error correction, Viterbi codes or something, where you've got an analog signal coming out of the flash read? If the ones and the zeros start blurring into each other, soft-decision decoding can let you sort it out with the error correction code, but only if you still have the analog signal.

A: I'm not sure about that; I'm not sure how it would work. Bear in mind, if you go back to the MLC diagram: how do you tell the difference? I suppose you could tell that a cell is on the borderline between two levels. I do not know.

Q: Last question. I guess what I'm hearing out of all this is that, as an end user, having my operating system and apps run on an SSD, so I get fast execution, is perfect, because I can just reload that operating system and I don't really care if I lose it; but for storing my actual data I should still be using a SATA or IDE hard drive. Is that what you're saying?

A: Yeah, I think so. There are problems with all kinds of data storage: you've got to do backups, and you've got to check that your backups work. But at this point I would certainly suggest that you should be more careful about checking your backups work if you are storing your mail spool on any kind of flash device. It's not entirely clear that our file systems for raw flash are perfect either, but at least if they screw up, that's our fault and we can fix it.
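The power-fail testing mentioned above usually follows a simple pattern: write sequence-numbered, checksummed records, cut power at a random point, then on the next boot check which records survived intact and which were torn. Here is a minimal sketch of that record format and verifier; all names and the record layout are made up for illustration, not taken from any existing tool:

```python
# Hedged sketch of a power-fail write test. Each record carries a sequence
# number and a CRC, so after an unexpected power cut we can tell which
# writes survived intact. The format here is illustrative only.
import struct
import zlib

def make_record(seq: int, payload: bytes) -> bytes:
    """Pack a record as: 4-byte sequence number, payload, 4-byte CRC32."""
    body = struct.pack("<I", seq) + payload
    return body + struct.pack("<I", zlib.crc32(body))

def check_record(rec: bytes):
    """Return the sequence number if the record is intact, else None."""
    if len(rec) < 8:                 # too short to hold seq + CRC: torn write
        return None
    body, (crc,) = rec[:-4], struct.unpack("<I", rec[-4:])
    if zlib.crc32(body) != crc:      # corrupted or partially written
        return None
    return struct.unpack("<I", body[:4])[0]

# Simulate a power cut that tears the last write short.
records = [make_record(i, b"x" * 60) for i in range(5)]
records[-1] = records[-1][:5]        # truncated by the power cut
print([check_record(r) for r in records])  # [0, 1, 2, 3, None]
```

In a real harness the records would go to the device under test, the power would be pulled by relay at a random moment, and the verifier would run after reboot; the surviving sequence numbers tell you exactly which writes the device actually committed.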
That's all we have time for. Can we put our hands together for our speaker?