nicklmg

WTF is this thing? - RAM on a PCI Card??


Wouldn't mind a modern version of something like this with DDR4. Not for any kind of practicality, just for the insanity of it.


Guides & Tutorials:

How to Format Storage Devices in Windows 10

A How-To: Drive Sharing in Windows 10

VFIO GPU Pass-through w/ Looking Glass KVM on Ubuntu 19.04

A How-To Guide: Building a Rudimentary Disk Enclosure

Three Methods to Reset a Windows Login Password

A Beginner's Guide to Debian CLI Based File Servers

A Beginner's Guide to PROXMOX

How to Use Rsync on Microsoft Windows for Cross-platform Automatic Data Replication

 

Guide/Tutorial in Progress:

How to Access your Private Servers from anywhere with Pritunl

 

In the Queue:

[Taking Suggestions]

 

Don't see what you need? Check the Full List or *PM me; if I haven't made it, I'll add it to the list.

*NOTE: I'll only add it to the list if the request is something I know I can do.

Link to post
Share on other sites

16 hour refresh time

 

MOM YOU DON'T UNDERSTAND! I HAVE TO TURN IN MY PC TODAY!


PLEASE QUOTE ME IF YOU ARE REPLYING TO ME
LinusWare Dev | NotCPUCores Dev

Desktop Build: Ryzen 7 1800X @ 4.0GHz, AsRock Fatal1ty X370 Professional Gaming, 32GB Corsair DDR4 @ 3000MHz, RX480 8GB OC, Benq XL2730 1440p 144Hz FS

Retro Build: Intel Pentium III @ 500 MHz, Dell Optiplex G1 Full AT Tower, 768MB SDRAM @ 133MHz, Integrated Graphics, Generic 1024x768 60Hz Monitor


 

Link to post
Share on other sites

A RAM drive isn't too stupid an idea, even today.
 

The main advantage of a RAM drive over an SSD or hard drive is for scratch disks and other temp and project drives.

RAM doesn't really wear out over time, unlike Flash media that can only take a couple of thousand write cycles.

RAM is also ridiculously fast, both in terms of bandwidth and latency, unlike hard drives.

 

So if you need tons of bandwidth, and rewrite data all the time, then a RAM drive isn't a bad idea.

 

But in most places where a RAM drive would make sense, one might have a computer with support for at least 128 GB of system RAM. (Not to mention servers where each CPU can take hundreds of GB...) So one could just use some of that system memory as a RAM drive through software.

 

So the only advantage a dedicated RAM drive has left is the fact that it can have its own backup power, and doesn't empty on a reboot. (Well, a software RAM drive can be mapped to a hard drive so that it stuffs its contents into it on power down, but with the downside that it all needs to be read back on boot...)

RAM drives are fairly niche devices, to say the least.
But interesting nonetheless.
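The bandwidth point is easy to see for yourself with a software RAM drive. A rough sketch, assuming a Linux box where /dev/shm is a tmpfs mount (true on most distros); the file sizes and filename are arbitrary:

```python
# Compare write throughput to a tmpfs-backed path (RAM) vs the regular
# temp directory (usually disk-backed). Crude, but shows the gap.
import os
import tempfile
import time

def write_throughput(directory, size_mb=256, block_kb=64):
    """Write size_mb of data to a file in `directory`, return MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    path = os.path.join(directory, "ramdrive_test.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# /dev/shm is tmpfs on most Linux systems; fall back to the temp dir.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
print(f"RAM-backed write: {write_throughput(ram_dir):.0f} MB/s")
print(f"Disk-backed write: {write_throughput(tempfile.gettempdir()):.0f} MB/s")
```

On a machine where the temp dir is itself tmpfs the two numbers will be similar; on a hard drive the gap is dramatic.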

Link to post
Share on other sites
1 hour ago, Windows7ge said:

Wouldn't mind a modern version of something like this with DDR4. Not for any kind of practicality, just for the insanity of it.

 

1 hour ago, Genwyn said:

Same reason it's not done today: there could be an eight-DDR4-slot PCIe card, but what are you gonna do with it? Put a bunch of DDR4 on it and have basically a really expensive PCIe SSD?

 

Why would anyone waste DDR4 on that? The question, as @4l45t0r pointed out, is whether there is scope for DDR3 to be recycled, using a PCIe interface this time. For example, get ECC DDR3 from decommissioned servers and build a ZIL/SLOG device for ZFS out of it.

Maybe it still doesn't make any sense (how much would the card cost? How much does an SSD of comparable capacity cost?), but that's the kind of question this video instills in me.

Link to post
Share on other sites

RAM drive. 

Not really any reason to get one when you could use Optane for way less $$$. 

My 1.5TB Optane drive cost around $900. DDR4 RAM costs around $100/32GB, or around $4800 for 1.5TB. You lose the data if the power is cut and you'll be limited by the PCIe interface.
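Plugging those numbers in (prices as quoted above), the cost-per-GB gap looks like this. A quick sketch:

```python
# Back-of-the-envelope cost comparison: $900 for 1.5 TB of Optane
# versus roughly $100 per 32 GB stick of DDR4.
OPTANE_PRICE, OPTANE_GB = 900, 1536          # 1.5 TB
DDR4_PRICE_PER_STICK, DDR4_GB_PER_STICK = 100, 32

optane_per_gb = OPTANE_PRICE / OPTANE_GB
ddr4_per_gb = DDR4_PRICE_PER_STICK / DDR4_GB_PER_STICK
sticks_needed = OPTANE_GB // DDR4_GB_PER_STICK

print(f"Optane: ${optane_per_gb:.2f}/GB")
print(f"DDR4:   ${ddr4_per_gb:.2f}/GB "
      f"({sticks_needed} sticks, ${sticks_needed * DDR4_PRICE_PER_STICK} total)")
```

That works out to 48 sticks and the $4800 figure above, at roughly 5x the price per GB of the Optane drive.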


R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

Link to post
Share on other sites
6 minutes ago, SpaceGhostC2C said:

Why would anyone waste DDR4 on that? The question, as @4l45t0r pointed out, is whether there is scope for DDR3 to be recycled, using a PCIe interface this time. For example, get ECC DDR3 from decommissioned servers and build a ZIL/SLOG device for ZFS out of it.

Maybe it still doesn't make any sense (how much would the card cost? How much does an SSD of comparable capacity cost?), but that's the kind of question this video instills in me.

I'm not certain what answer you're looking for from me.

Are you making a counter argument?

 

Also, yes, a ZIL/SLOG like this still wouldn't make sense. The use cases for a ZIL/SLOG aren't broad enough to be worth it for people who would want to build one out of RAM. The people who would need one would have the funds for a proper PCIe SSD.



Link to post
Share on other sites
7 minutes ago, comander said:

RAM drive. 

Not really any reason to get one when you could use Optane for way less $$$. 

That's the question, though: what would it cost?

 

7 minutes ago, comander said:

My 1.5TB Optane drive cost around $900. DDR4 RAM costs around $100/32GB, or around $4800 for 1.5TB.

But again, the point of this type of card, even back then, was to repurpose older RAM, not waste new RAM. So what matters is the cost of the card itself plus the forgone resale price of the used DDR3.

 

7 minutes ago, comander said:

You lose the data if the power is cut

which wouldn't be relevant for use cases in which RAM-sized drives make sense, though.

 

7 minutes ago, comander said:

and you'll be limited by the PCIe interface.

As with any NVMe drive?

 

1 minute ago, Windows7ge said:

I'm not certain what answer you're looking for from me.

None? Did I ask you a question? If so, then the answer to that question, I guess. But mostly from anyone, rather than you specifically :P 

 

1 minute ago, Windows7ge said:

Also, yes, a ZIL/SLOG like this still wouldn't make sense. The use cases for a ZIL/SLOG aren't broad enough to be worth it for people who would want to build one out of RAM. The people who would need one would have the funds for a proper PCIe SSD.

Not so sure about that. Plenty of people repurpose old hardware as FreeNAS boxes (regardless of what iXsystems' forum snobs may tell them about the idea xD), often with no SLOG at all. It seems to me that for such folks it all boils down to whether it's a $200 card or a $20 card.

Link to post
Share on other sites
1 hour ago, Windows7ge said:

Wouldn't mind a modern version of something like this with DDR4. Not for any kind of practicality, just for the insanity of it.

I'd prefer like 8 DIMMs of DDR3 Registered. I have SOOOOOOO MUCH of that lying around. I think one of the major selling points of the I-RAM (yes, I had one) was being able to use recently retired RAM that I had lying around.

Link to post
Share on other sites
3 hours ago, SpaceGhostC2C said:

That's the question, though: what would it cost?

 

But again, the point of this type of card, even back then, was to repurpose older RAM, not waste new RAM. So what matters is the cost of the card itself plus the forgone resale price of the used DDR3.

 

which wouldn't be relevant for use cases in which RAM-sized drives make sense, though.

 

As with any NVMe drive?

For what it's worth, I'm thinking of an idealized PCIe RAM drive. This unit was on PCI (slower than SATA) and used DDR1 RAM.


You could use a 32 or 64GB Optane drive for a NAS and have "good enough" performance with WAY WAY less BS and lower power draw. One thing to keep in mind: your IOPS will also be limited by wire speed.

As for why I'm bringing up PCIe issues: you won't get full RAM speed due to the PCIe bus. The RAM drive will be faster, but the downsides (even if everything were free) arguably outweigh the real-world performance benefits.

 



Link to post
Share on other sites

A RAM drive in the modern world makes sense for applications that use scratch disks, generally use them at high bandwidth, and work on semi-small datasets. Normally this RAM drive will live in main RAM and simply be a software solution. Few applications benefit much from this, other than scratch disks for video editing, caching for highly trafficked database servers, temporarily storing incoming sensor data for the LHC, and such.

But normally, SSDs will be good enough for most users, even in the enterprise sector. RAM drives only start becoming logical when one trashes SSDs fast enough for RAM to look cheap in comparison: literally rewriting the data every few minutes or so, for months on end.

As an example, one might need a few tens of GB of storage and rewrite all of it twice a minute on average (or every few seconds in some applications). A typical SSD then wouldn't last more than a week or three, and if one runs this workload for years, RAM drives become a very cost-effective solution. Now, most SSDs have far more than a few tens of GB of storage, so they would take more time to wear out, but that doesn't buy much extra life for something that will run for years. (Not to mention the time spent paying a technician to change the SSDs every month or so...)

The I-RAM storage device was a cool idea, though, at a time when SSDs were still a lofty dream somewhere in the future. RAM was simply "cheaper" and more reliable at the time. Not to mention that people were switching from one type of RAM to another, meaning that the I-RAM had access to what most people considered useless crap...

A product like the I-RAM could still be interesting for scratch-disk applications on systems that don't support more than 32 or 128 GB of RAM. After all, stuffing eight 16 GB DDR3 modules onto a card, giving us a 128 GB scratch disk, would be interesting for applications where one would burn through SSDs way too fast to be economical. And yes, PCIe 4.0 isn't the fastest thing in the world, but we could still rewrite those 128 GB of data about 17280 times a day, or once every 5 seconds. So with a workload doing even a quarter of that, most 128 GB SSDs wouldn't survive all that long. And one might run the workload for weeks, or indefinitely.
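The arithmetic above can be sanity-checked in a few lines. The 100 TBW endurance rating below is an illustrative assumption for a small consumer SSD, not a figure from the thread:

```python
# How many full rewrites of a 128 GB card fit in a day at one rewrite
# per 5 seconds, and how fast that eats a typical SSD's rated endurance.
CAPACITY_GB = 128
REWRITE_INTERVAL_S = 5            # one full 128 GB rewrite every 5 s
SECONDS_PER_DAY = 86_400

rewrites_per_day = SECONDS_PER_DAY / REWRITE_INTERVAL_S
written_tb_per_day = rewrites_per_day * CAPACITY_GB / 1000

SSD_ENDURANCE_TBW = 100           # assumed rating for a small consumer SSD
days_to_wear_out = SSD_ENDURANCE_TBW / written_tb_per_day

print(f"{rewrites_per_day:.0f} full rewrites/day, "
      f"{written_tb_per_day:.0f} TB written/day")
print(f"A {SSD_ENDURANCE_TBW} TBW SSD hits its rating in "
      f"~{days_to_wear_out * 24:.1f} hours")
```

That reproduces the 17280 rewrites/day figure, and shows why even a fraction of that workload kills a small SSD in short order.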

It is, though, a fairly niche use case to say the least.

edit:
Having a RAM-drive card, though, has the advantage that we can supply it with its own redundant power, meaning that if the computer doing the work decides to crash, we have not lost our data.

A UPS can save us from normal power outages and give the system enough time to put the data to disk, but total system crashes tend to be less forgiving if you use a software RAM drive... (Yes, a RAM drive is obviously not a redundant backup solution for archival storage.)

Link to post
Share on other sites
39 minutes ago, SpaceGhostC2C said:

Not so sure about that. Plenty of people repurpose old hardware as FreeNAS boxes (regardless of what iXsystems' forum snobs may tell them about the idea xD), often with no SLOG at all. It seems to me that for such folks it all boils down to whether it's a $200 card or a $20 card.

The problem is that ZFS already uses RAM as a read cache, and a ZIL only accelerates synchronous writes (i.e., virtual machines/databases), which would make such tech useless for most FreeNAS users.



Link to post
Share on other sites

This particular RAM drive receives power from 3.3V standby (there are pins for it on the PCI slot), so while the PC is plugged in, it's powered.

You need the battery only for when the PC is completely powered off and the power cable is unplugged.

 

Yeah, like the video says, the problem with this card was the use of an FPGA instead of a dedicated chip, and the 150 MB/s limit of the SATA 1 connector (if I remember correctly).

 

They could have made a custom controller chip with just two DDR controllers inside, some cache memory and the SATA controller, and used four SO-DIMM sticks. Then, get a 4-port SATA RAID controller and configure it in RAID to show up as a single drive.

 

Link to post
Share on other sites
5 hours ago, Den-Fi said:

I'd prefer like 8 DIMMs of DDR3 Registered. I have SOOOOOOO MUCH of that lying around. I think one of the major selling points of the I-RAM (yes, I had one) was being able to use recently retired RAM that I had lying around.

I'd like to see one for DDR4 just because I have 64GB DIMMs.



Link to post
Share on other sites
7 hours ago, Genwyn said:

In an era of dumb ideas with computer parts, this was one of the dumbest.

For years prior, tech forums were all "why doesn't anyone put RAM on a PCI card?", thinking it would make an amazing expansion option for old RAM to be used as a RAM disk or system RAM.

 

Which turned out to be nearly useless and really expensive.

You would put your 1-2 GB of RAM on a $100 expansion card and have a super fast... 1-2 GB of storage space.


Same reason it's not done today: there could be an eight-DDR4-slot PCIe card, but what are you gonna do with it? Put a bunch of DDR4 on it and have basically a really expensive PCIe SSD?

yeah

Link to post
Share on other sites

I used my ACARD ANS-9010BA with 16GB DDR2 non-ECC + a 16GB CF card for backup in my old rig with a 32-bit OS. It held temporary files, the pagefile, my Firefox profile folder, etc. It was very good :) This ACARD supports up to 64GB of DDR2 unbuffered ECC.

 

[Photo: ACARD ANS-9010BA with 16GB DDR2]


PC #1 : Gigabyte B150M-HD3 DDR3 | i7-7700 | 16GB DDR3L 1600 | LSI SAS 9211-8i | 240GB NVMe M.2 PCIe PNY CS2030 | HDDs 41.5TB total | Quantum LTO5 HH SAS drive | Corsair HX750i | Cooler Master Stacker STC-T01 | ASUS TUF Gaming VG27AQ 2560x1440 @ 60 Hz (plugged HDMI port, shared with PC #2) | Win10
PC #2 : Gigabyte MW70-3S0 | 2x E5-2667 v3 | 32GB DDR4 ECC Reg 2133 | Gigabyte GeForce RTX 2080 SUPER Gaming OC 8G | 6x 120GB SSD SATA RAID0 | Seasonic Prime Ultra 850 Titanium | Lian Li PC-A77 | ASUS TUF Gaming VG27AQ 2560x1440 @ 144 Hz (plugged DP port, shared with PC #1) | Win10
PC #3 : Mini PC Zotac 4K | Celeron N3150 | 4GB DDR3L 1600 | 250GB M.2 SATA WD Blue | Sound Blaster X-Fi Surround 5.1 Pro USB | Samsung Blu-ray writer USB | Genius SP-HF1800A | TV Panasonic TX-40DX600E UltraHD | Win10
PC #4 : ASUS P2B-F | PIII 500MHz | 512MB SDR 100 | Leadtek WinFast GeForce 256 SDR 32MB | 2x Guillemot Maxi Gamer 3D² 8MB in SLI | Creative Sound Blaster AWE64 ISA | 80GB HDD UATA | Fortron/Source FSP235-60GI | Zalman R1 | DELL E151FP 15" TFT 1024x768 | Win98SE

Laptop : Lenovo ThinkPad T61p | T9500 | 4GB DDR2 667 | Quadro FX 570m | 120GB SSD OCZ Vertex 2 | 15.4" TFT 1920x1200 | Win10

PC tablet : Fujitsu Point 1600 | PMMX 166MHz | 160MB EDO | 20GB HDD UATA | external floppy drive | 10.4" DSTN 800x600 touchscreen | AGFA SnapScan 1212u blue | Win98SE

Laptop collection #1 : IBM ThinkPad 340CSE | 486SLC2 66MHz | 12MB RAM | 360MB IDE | internal floppy drive | 10.4" DSTN 640x480 256 color | Win3.1 with MS-DOS 6.22

Laptop collection #2 : IBM ThinkPad 380E | PMMX 150MHz | 80MB EDO | NeoMagic MagicGraph128XD | 2.1GB IDE | internal floppy drive | internal CD-ROM drive | Intel PRO/100 Mobile PCMCIA | 12.1" FRSTN 800x600 16-bit color | Win98

Laptop collection #3 : Toshiba T2130CS | 486DX4 75MHz | 32MB EDO | 520MB IDE | internal floppy drive | 10.4" STN 640x480 256 color | Win3.1 with MS-DOS 6.22

And 5 others computers (2 Apple classic, 1 mini PC WinXP and 2 PC pocket WinCE)

Link to post
Share on other sites

I could still use such a device. I currently run a virtual RAM disk with the software in it, which constantly reads and writes data to it like a DB.

Are there any affordable products on the market right now?

Thanks


From AT. :x

Link to post
Share on other sites

Wow, I was just thinking about this concept a few days ago.

But what I thought was more along the lines of: if an Optane-like device were made using GDDR5/GDDR6 chips, would it be faster? Or what if an SSD were made with GDDR5/GDDR6 chips?

Or is it not only not really viable (for the reasons explained in this video) but also too expensive?

Link to post
Share on other sites

The memory chips for video cards (GDDR5 or better) are optimized for big transfers: the controller requests some data, and you get the rated speed if you transfer, say, a 128-512 KB burst of data.

If you try to read just 512 bytes (a sector size) or something similarly small, the memory chips will be slower, so you won't get the performance you're thinking of. Same for writing small amounts of data to them.

If you copy or write long files, it will be fast.

 

I think the maximum individual chip is 2 GB (16 Gbit). Most video cards have 8-12 GB because they use 8-12 x 1 GB chips, as these are available in higher volume.

The chips themselves are expensive... not sure of the price now; I see 1 GB (8 Gbit) Micron GDDR5 in a store for $8 at 2000 pcs, but in reality a company probably buys such chips for less than $5 each.

They consume more power, but I suppose you could lower the power if you don't need the speed (e.g. run the chips at 1000 MHz instead of 2000 MHz)... by default I think consumption is a bit under 1 W per chip, and the memory controller also consumes a few watts.

It takes a lot of silicon area to implement a memory controller that can handle a bunch of these chips, so any such controller chip would be expensive. You'd also need a specialized circuit board with 6 or more layers, which adds cost.

Also, because they work at such high speeds they need to be very close to the memory controller, which means you can't use LOTS of them to make bigger-capacity drives cheaply... I suppose you could have a tiny SATA-to-memory controller with 8 GDDR5 chips around it, then put 2-4 or more of these on a circuit board with a 4+ port SATA controller in the middle connecting them together in RAID 0 or something like that.

 

At the end of the day, you wouldn't be able to keep the memory fresh when the PC is off; you'd need a LOT of power to hold the data. A good design would use a cheap 32-64 GB SSD and keep enough charge in the battery to power the RAM and SSD for 5-10 minutes, dumping the 32-64 GB of data from GDDR5 to the SSD (at around 200 MB/s, 64 GB takes roughly 320 seconds, a bit over 5 minutes).
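Using the same round numbers (1 GB taken as 1000 MB, and the 200 MB/s sustained write speed assumed above), the dump time works out as:

```python
# Time to flush the battery-backed DRAM contents to a small SSD.
def dump_time_seconds(capacity_gb, ssd_mb_s=200):
    """Seconds to write capacity_gb of data at ssd_mb_s MB/s (1 GB = 1000 MB)."""
    return capacity_gb * 1000 / ssd_mb_s

for gb in (32, 64):
    t = dump_time_seconds(gb)
    print(f"{gb} GB at 200 MB/s -> {t:.0f} s (~{t / 60:.1f} min)")
```

So a battery sized for 5-10 minutes of runtime covers the worst-case 64 GB dump with a little margin.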

 

They could make a device like this with HBM2 memory chips; those go up to 16 GB per chip, and the standard allows for up to 12 dies in a stack (24 GB if made with 2 GB x 12 stacks).

They could make a base chip with support for 8 such chips, so you'd get 64-128 GB in a very small space, but it would be stupidly expensive... 8 GB HBM2 chips are maybe 2.5x more expensive than 8 x 1 GB chips (maybe $80 a chip vs $30), and it's more expensive to connect these chips to your silicon.

Link to post
Share on other sites

A DDR3-to-PCIe 4.0 x16 card would be very useful.


Specs: Motherboard: Asus X470-PLUS TUF Gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE LPX DDR4 3200MHz CL16-18-18-36 2x8GB

CPU: Ryzen 7 2700X @ 4.2GHz Case: Antec P8 PSU: G.Storm GS850 Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM

Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 Ti Black Edition @ 2GHz

Link to post
Share on other sites

The I-RAM was a niche product from an era when SSDs were just on the horizon. In that context, it had its place in the market until the arrival of SSDs. Capacity is what eventually killed it, as the I-RAM still had better IOPS and latency than the first generation of SATA 2 SSDs (consumers have forgotten just how bad the initial wave of SSDs was compared to their modern counterparts). The consumer/gamer usage for the I-RAM back in the day wasn't exclusively the OS, but rather holding all the data for an MMORPG. I can't recall which game in particular, but it was one where, as soon as all the assets loaded, the player was immediately thrown into the dungeon instance. This gave players with the fastest storage a clear advantage, and the I-RAM was the premium way to do it.

 

For those looking for more modern solutions: there were DDR2-based solutions, as @X-System points out, as well as a DDR3-based solution (and a SATA version). Gigabyte even had a drive-bay DDR2 revision in the works that they showed off at trade shows, but I don't believe they ever brought it to market.

 

For servers, this idea also existed. Texas Memory Systems built giant SAN devices powered by multiple DRAM channels. Their implementation is probably the only real-world use case where RAID 3 is leveraged and makes sense. Much like the I-RAM, the SSD market pretty much pushed the DRAM-based solution out of the market. (Texas Memory Systems was purchased by IBM, which moved some of that technology into their POWER and mainframe lines.)

 

For mainstream servers and storage, the idea of DRAM-based storage for maximum performance and lowest latency is being revisited. The main driving factor is that DRAM prices are currently down while high-memory-capacity systems are increasing in price; see the $3000 USD premium on high-memory-support Xeon SP models on top of their already high prices. A device like the I-RAM could easily resurface as a U.2 device with DDR4 as an alternative to NAND-based NVMe drives. I do see this opportunity as narrow, though, once PCIe 4.0/5.0 NAND NVMe drives arrive that can saturate those links.

Link to post
Share on other sites
