More bandwidth than your storage capacity! Flagship RTX 50 Series card rumoured to have GDDR7 memory and a 384-bit bus

filpo

Summary

Leaker Kopite7kimi hints at the flagship Blackwell card using GDDR7 and a 384-bit bus

Quotes

Quote

However, today's topic discusses the memory configuration, which according to Kopite7kimi is going to make use of the latest GDDR7 memory standard. GDDR7 builds on the GDDR6 & GDDR6X standards with even faster pin speeds and denser capacities. The initial dies utilize up to 24 Gb modules and up to 32 Gbps speeds. These are the ones that will debut in 2024. There will be an even faster revision planned for 2026, but it is unlikely that NVIDIA would wait that long to use the latest memory technologies. Rather, that would be used by a new or a refreshed family.

Calculated Bandwidth:

Quote

Following is the bandwidth the 32 Gbps pin speeds would offer across multiple bus configurations:

  • 128-bit @ 32 Gbps: 512 GB/s
  • 192-bit @ 32 Gbps: 768 GB/s
  • 256-bit @ 32 Gbps: 1024 GB/s
  • 320-bit @ 32 Gbps: 1280 GB/s
  • 384-bit @ 32 Gbps: 1536 GB/s
  • 512-bit @ 32 Gbps: 2048 GB/s
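The table above follows directly from bus width times per-pin speed, converted from gigabits to gigabytes. A minimal sketch (illustrative only, not tied to any real SKU):

```python
# Peak bandwidth = number of pins (bus width) x per-pin speed, divided
# by 8 to convert gigabits per second to gigabytes per second.
def bandwidth_gbps(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_width_bits * pin_speed_gbps / 8

# Reproduce the table from the article:
for bus in (128, 192, 256, 320, 384, 512):
    print(f"{bus}-bit @ 32 Gbps: {bandwidth_gbps(bus, 32):.0f} GB/s")
```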

Similar Founders Edition to the leaked '4090 Ti' prototype

Quote

Kopite also states that NVIDIA's GeForce RTX 50 flagship GPU will feature a Founders Edition design similar to the leaked "GeForce RTX 4090 Ti" GPU which has appeared on several occasions. That card features a quad-slot cooling solution with a unique side-mounted PCB design and power cables routed through large copper links around the body. If the same design is used for RTX 50 GPUs, we can still expect design updates in the final model versus the prototype solution.

Assumed launch + showcase

Quote

The information also mentions that the GB202 could arrive as early as next year. The GB203 could utilize a 256-bit bus interface, while the GB204 and GB205 GPUs are said to be mutually exclusive.

 

Yesterday, RedGamingTech also highlighted similar specifications, which are now corroborated by Kopite, so things are really shaping up for NVIDIA's next-gen lineup. The company is also prepping its GeForce RTX 40 "SUPER" refresh, which will debut at CES in January 2024 and launch the same month if everything goes as planned. We definitely shouldn't expect anything official on the RTX 50 GPUs this early from NVIDIA, but we can expect more updates and leaks down the line as we get closer to the next-gen gaming family.

Expected release and chip names

Quote

[Image: chart of expected release windows and chip names]

My thoughts

Could be interesting to see how this changes the productivity side of things. It would be cool to see GDDR7 in a mass-market product this early, though we can only hope.

 

Sources

RedGamingTech on X: "So Kopite7 thinks its 384-bit bus and GDDR7 too, which matches my rumor So that's about 1.5TB/s bandwidth, depending on R7 clocks." / X (twitter.com)

NVIDIA GeForce RTX 50 Flagship Gaming GPU Rumored To Feature GDDR7 Memory & 384-bit Bus (wccftech.com)

RTX 50 Specs & Performance UPDATE | Nvidia's NEXT LEAP - YouTube

NVIDIA RTX 50 "Blackwell" GB202 GPU rumors point towards GDDR7 384-bit memory - VideoCardz.com

Message me on discord (bread8669) for more help 

 

Current parts list

CPU: R5 5600 CPU Cooler: Stock

Mobo: Asrock B550M-ITX/ac

RAM: Vengeance LPX 2x8GB 3200MHz CL16

SSD: P5 Plus 500GB Secondary SSD: Kingston A400 960GB

GPU: MSI RTX 3060 Gaming X

Fans: 1x Noctua NF-P12 Redux, 1x Arctic P12, 1x Corsair LL120

PSU: NZXT SP-650M SFX-L PSU from H1

Monitor: Samsung WQHD 34 inch and 43 inch TV

Mouse: Logitech G203

Keyboard: Rii membrane keyboard

Damn this space can fit a 4090 (just kidding)


Okay but Blackwell is a strong architecture name. 

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


Nice bandwidth, especially useful for those LLMs. Too bad Nvidia won't increase the VRAM on the top tier, but it makes sense given they don't want to cannibalize their enterprise offerings.

 

When the 6000 series are out I'll see if I can grab a pair of 5090s lol

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


11 minutes ago, igormp said:

Nice bandwidth, especially useful for those LLMs. Too bad Nvidia won't increase the VRAM on the top tier, but it makes sense given they don't want to cannibalize their enterprise offerings.

 

When the 6000 series are out I'll see if I can grab a pair of 5090s lol

Exactly. Why have a greater-than-24GB card for $2,499 when you can institute the data center tax and sell the same die, but with 40GB, for $12,999?


if it actually releases in 2025 I wonder if they will use 3nm or go straight to 2nm.

mY sYsTeM iS Not pErfoRmInG aS gOOd As I sAW oN yOuTuBe. WhA t IS a GoOd FaN CuRVe??!!? wHat aRe tEh GoOd OvERclok SeTTinGS FoR My CaRd??  HoW CaN I foRcE my GpU to uSe 1o0%? BuT WiLL i HaVE Bo0tllEnEcKs? RyZEN dOeS NoT peRfORm BetTer wItH HiGhER sPEED RaM!!dId i WiN teH SiLiCON LotTerrYyOu ShoUlD dEsHrOuD uR GPUmy SYstEm iS UNDerPerforMiNg iN WarzONEcan mY Pc Run WiNdOwS 11 ?woUld BaKInG MY GRaPHics card fIX it? MultimETeR TeSTiNG!! aMd'S GpU DrIvErS aRe as goOD aS NviDia's YOU SHoUlD oVERCloCk yOUR ramS To 5000C18

 


3 minutes ago, Levent said:

if it actually releases in 2025 I wonder if they will use 3nm or go straight to 2nm.

It's supposed to be released next year, not 2025.



1 minute ago, igormp said:

It's supposed to be released next year, not 2025.

The chart seems to indicate 2025; either way, if it's 2024 for sure then that locks it to 3nm.

 


 


21 minutes ago, igormp said:

It's supposed to be released next year, not 2025.

The reason SUPER is happening again is that SUPER is supposed to last through 2024. We have known Blackwell changed Nvidia's cadence for a while now.
Perhaps the 5090 can launch in Q4 of 2024, but nothing humanly buyable will.

GDDR7 launches in 2024. 


9 minutes ago, starsmine said:

The reason SUPER is happening again is that SUPER is supposed to last through 2024. We have known Blackwell changed Nvidia's cadence for a while now.
Perhaps the 5090 can launch in Q4 of 2024, but nothing humanly buyable will.

GDDR7 launches in 2024. 

Would be cool. 2025 sounds like a good time to make a new build to me, and I'll have enough time to save up (the gazillions it'll cost ._.)

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


384-bit doesn't sound like a lot, because I'm looking at my old HD 2900 XT and the sticker on it says 512-bit. These days new GPUs lack excitement; they're just a bit faster than the previous gen and cost your entire retirement savings.

Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


5 minutes ago, NumLock21 said:

384-bit doesn't sound like a lot, because I'm looking at my old HD 2900 XT and the sticker on it says 512-bit. These days new GPUs lack excitement; they're just a bit faster than the previous gen and cost your entire retirement savings.

RTX 4090 is also 384-bit so it's not even a bit faster.

 

Remember AMD Vega with HBM? Memory bandwidth is not everything, also compression exists to compensate.

 

That said, I'll wait till 6000 series that will be the same performance and price as 5000 series but with features exclusive to 6000 series.


24 minutes ago, NumLock21 said:

384-bit doesn't sound like a lot, because I'm looking at my old HD 2900 XT and the sticker on it says 512-bit. These days new GPUs lack excitement; they're just a bit faster than the previous gen and cost your entire retirement savings.

Bus width is related to how many chips you can add; there's a physical limit to the number of chips you can clump onto a PCB before you need to go with fancy interposers (think HBM). More chips = more memory, and more stuff going in parallel = more bandwidth.
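A rough sketch of the relation igormp describes. The 32-bit-per-package interface is standard for current GDDR parts; the 16 Gb (2 GB) die density is my assumption for illustration:

```python
# Each GDDR package exposes a fixed-width interface, so the bus width
# dictates how many packages a card carries, and thus (for a given die
# density) how much VRAM it can have without clamshell tricks.
def gddr_config(bus_width_bits: int, bits_per_chip: int = 32,
                chip_capacity_gb: int = 2) -> tuple[int, int]:
    """Return (chip count, total VRAM in GB) for a given bus width."""
    chips = bus_width_bits // bits_per_chip
    return chips, chips * chip_capacity_gb

# e.g. a 384-bit bus -> 12 chips -> 24 GB (the RTX 4090 layout)
print(gddr_config(384))  # (12, 24)
print(gddr_config(256))  # (8, 16)
```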



1 hour ago, NumLock21 said:

384-bit doesn't sound like a lot, because I'm looking at my old HD 2900 XT and the sticker on it says 512-bit. These days new GPUs lack excitement; they're just a bit faster than the previous gen and cost your entire retirement savings.

Memory bit width doesn't really scale with nodes; in fact analog scaling is 100% dead.
Comparing it to old gens isn't great because it's bandwidth that is important to these chips, not bus width.

Going wider takes up a lot of die space and a lot of PCB space, and if the core can't take advantage of the bandwidth, then it's better to just build a larger core.

The massive increase in bandwidth vs the previous gen comes from faster memory like GDDR7 using PAM-3 encoding (3 bits per two clocks, then doubled because DDR), so it's like triple data rate but technically not.

The 4090 with its 384-bit bus got 1008 GB/s of bandwidth.
The 5090 with its 384-bit bus, at launch-speed GDDR7, will have 1536 GB/s.

That is more than a bit faster. @WereCat

I just hope all the chips down the range get a bump in width, with enough chips.
But it would be sad to have the 60 series again at 12GB rather than 16GB.
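The bandwidth arithmetic above can be sketched as follows. The bits-per-symbol figures are my reading of the signalling schemes (NRZ = 1, PAM4 = 2, PAM3 = 3 bits per 2 symbols), not spec text:

```python
# Per-pin data rate depends on both the symbol rate and how many bits
# each symbol carries under the chosen encoding.
def pin_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: float) -> float:
    """Per-pin data rate in Gbps for a given signalling scheme."""
    return symbol_rate_gbaud * bits_per_symbol

def card_bw_gbs(bus_bits: int, pin_gbps: float) -> float:
    """Total card bandwidth in GB/s."""
    return bus_bits * pin_gbps / 8

print(card_bw_gbs(384, 21))  # RTX 4090, GDDR6X at 21 Gbps: 1008.0
print(card_bw_gbs(384, 32))  # rumoured flagship, GDDR7 at 32 Gbps: 1536.0
```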
 


2 hours ago, WereCat said:

RTX 4090 is also 384-bit so it's not even a bit faster.

 

Remember AMD Vega with HBM? Memory bandwidth is not everything, also compression exists to compensate.

 

That said, I'll wait till 6000 series that will be the same performance and price as 5000 series but with features exclusive to 6000 series.

I'll wait until my current gpu is no longer working, and then grab the latest gpu offering of that time.

 

2 hours ago, igormp said:

Bus bandwidth is related to how many chips you can add, there's a physical limit to the amount of chips you can clump into a PCB before you need to go with fancy interposers (think HBM). More chips = more memory and more stuff going in in parallel = more bandwidth.

AMD went with fancy interposers aka HBM and their card didn't really WOW me that much.

 

1 hour ago, starsmine said:

Memory bit width doesn't really scale with nodes; in fact analog scaling is 100% dead.
Comparing it to old gens isn't great because it's bandwidth that is important to these chips, not bus width.

Going wider takes up a lot of die space and a lot of PCB space, and if the core can't take advantage of the bandwidth, then it's better to just build a larger core.

The massive increase in bandwidth vs the previous gen comes from faster memory like GDDR7 using PAM-3 encoding (3 bits per two clocks, then doubled because DDR), so it's like triple data rate but technically not.
 

Total Bandwidth > Bits and whatever else they have

The point of bringing up my old 512-bit card isn't performance, because any current entry-level GPU will be much faster than my old HD 2900 XT. It's more about how these tech news sites need to come up with better titles.


 


3 hours ago, Levent said:

if it actually releases in 2025 I wonder if they will use 3nm or go straight to 2nm.

Also, not a chance.
3nm can still do monolithic (TSMC only).
The 2nm reticle limit is half, as TSMC moves to GAAFET; monolithic as we know it will be dead.
Unless Blackwell is chiplet based, 2nm is a non-starter for the high end.

Samsung's 3nm is GAAFET.

I have not yet heard credible leaks about the timetable of RTX moving to chiplets. We know Blackwell-next will be chiplet; we don't know about Blackwell yet (or I don't).


1 hour ago, NumLock21 said:

AMD went with fancy interposers aka HBM and their card didn't really WOW me that much.

 

As said in the post above yours, throwing more bandwidth at a chip that won't fully make use of it is somewhat meaningless. High-end data center products still use HBM, and there are applications nowadays that are really starved bandwidth-wise, even on a 4090.



1 hour ago, igormp said:

As said in the post above yours, throwing more bandwidth at a chip that won't fully make use of it is somewhat meaningless. High-end data center products still use HBM, and there are applications nowadays that are really starved bandwidth-wise, even on a 4090.

512-bit was also really complicated and hard to do; it was done out of necessity due to the GDDR technology at the time. And only by AMD, since Nvidia was using effective memory compression at the time, lowering the raw bandwidth required.


3 hours ago, starsmine said:

3nm can still do monolithic (TSMC only)

For the rumoured early 2025 release, a variation of 3nm is the likely choice. It'll be pretty refined by that point and well positioned for volume without leading-edge prices. 2nm isn't expected until late 2025, might not be high-volume ready for some time, and probably won't be affordable unless you're Apple.

 

3 hours ago, starsmine said:

The 2nm reticle limit is half, as TSMC moves to GAAFET; monolithic as we know it will be dead.

I had to look that up since I don't follow it that closely. I found:

Quote

Current 193i and EUV lithography steppers have a maximum field size of 26 mm by 33 mm, or 858 mm². In future high-NA EUV lithography steppers the reticle limit will be halved to 26 mm by 16.5 mm, or 429 mm², due to the use of an anamorphic lens.

https://en.wikichip.org/wiki/mask

 

For scale, AD103 is 379 mm² and AD102 is 609 mm². Of current GPU sizes, 4080-equivalent and below would still be OK monolithic on high-NA nodes; a 4090-equivalent would not.
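A quick sanity check of those die sizes against the two reticle limits quoted from WikiChip:

```python
# Current EUV field vs. the halved high-NA field, per the WikiChip quote.
RETICLE_EUV = 26 * 33        # 858 mm²
RETICLE_HIGH_NA = 26 * 16.5  # 429 mm²

# Ada die sizes quoted in the post above.
dies = {"AD103": 379, "AD102": 609}

for name, area in dies.items():
    fits = area <= RETICLE_HIGH_NA
    print(f"{name}: {area} mm² -> fits high-NA reticle: {fits}")
```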

 

1 hour ago, leadeater said:

512bit was also really complicated and hard to do, it was done out of necessity due to the GDDR technology at the time. And only by AMD too, since Nvidia was utilizing effective memory compression at the time lowering raw bandwidth required.

Since RDNA2 we might also have to factor bigger caches into the mix. The chart below is a modified AMD one showing more info, along with a table of how AMD calculated effective bandwidth after adding the cache.

 

[Image: modified AMD chart]

[Image: table of AMD's effective bandwidth calculation]

 

Source https://twitter.com/Locuza_/status/1405421774188298241

Worth looking at that thread for more insights.
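As a hedged illustration (not AMD's exact methodology) of how a large cache produces an "effective bandwidth" figure: weight the cache and DRAM bandwidths by the cache hit rate.

```python
# Requests that hit the on-die cache are served at cache speed; misses
# fall through to DRAM. The blend, weighted by hit rate, is the
# "effective" bandwidth the workload sees.
def effective_bandwidth(hit_rate: float, cache_bw: float, dram_bw: float) -> float:
    """Blended bandwidth in GB/s for a given cache hit rate."""
    return hit_rate * cache_bw + (1 - hit_rate) * dram_bw

# Illustrative numbers only (a ~512 GB/s GDDR6 card with a much faster
# on-die cache), not AMD's published figures:
print(effective_bandwidth(0.58, 1986.0, 512.0))
```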

 

 

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


7 hours ago, Levent said:

if it actually releases in 2025 I wonder if they will use 3nm or go straight to 2nm.

Just a shot in the dark, but I'd imagine TSMC N3 of some flavor.

N2 capacity will likely be completely consumed by Apple for a while after introduction, and I feel like Nvidia hasn't made attempts to get onto the newest nodes with their releases in the last few generations, at least for their gaming products.


I call bullshit. AI regulation will gimp the fuck out of this card for anything that can brew personal GPT LLMs off the grid.


And it can be ours, for the extraordinarily low price of $2,499.99 for an RTX 5080.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


6 hours ago, porina said:
9 hours ago, starsmine said:

The 2nm reticle limit is half, as TSMC moves to GAAFET; monolithic as we know it will be dead.

I had to look that up since I don't follow it that closely. I found:

Quote

Current 193i and EUV lithography steppers have a maximum field size of 26 mm by 33 mm, or 858 mm². In future high-NA EUV lithography steppers the reticle limit will be halved to 26 mm by 16.5 mm, or 429 mm², due to the use of an anamorphic lens.

https://en.wikichip.org/wiki/mask

 

For scale, AD103 is 379 mm² and AD102 is 609 mm². Of current GPU sizes, 4080-equivalent and below would still be OK monolithic on high-NA nodes; a 4090-equivalent would not.

You are not talking about a hard limit, but a soft limit. You can do monolithic designs by simply using several fields. Cerebras makes wafer-sized monolithic processors.

However, you probably need a proper "shear line", and I'm unsure what the tolerance needs to be so both masks overlap properly; this mainly depends on the precision of positioning.

This not only adds another failure mode (reducing yield) but also additional costs for the mask and the manufacturing process.

A silicon interposer is also not cheap nor easy to make. Some manufacturers might still favour monolithic designs even with the drawbacks of this method.

 

 


13 hours ago, NumLock21 said:

384-bit doesn't sound like a lot, because I'm looking at my old HD 2900 XT and the sticker on it says 512-bit. These days new GPUs lack excitement; they're just a bit faster than the previous gen and cost your entire retirement savings.

13 hours ago, WereCat said:

RTX 4090 is also 384-bit so it's not even a bit faster.

 

Remember AMD Vega with HBM? Memory bandwidth is not everything, also compression exists to compensate.

It's important to remember not only that memory bandwidth isn't everything, but that memory bus width isn't everything either.

The HD 2900 XT might have had a 512-bit bus, but it could only achieve 106 GB/s of bandwidth.

This card is rumored to push over 1500 GB/s, so it offers roughly 14 times the bandwidth despite a narrower bus. And that's without counting the modern compression that increases effective memory bandwidth in applications like games. For example, Nvidia upgraded their delta color compression in the RTX 20 series so that a game like GTA 5 needed ~20-25% less memory bandwidth.

We can't just look at the memory bus width and think that's enough to make judgment calls on. That's a bit like looking at the latest Ryzen 7000 processor and going "huh, still only 64-bit? Guess CPUs haven't progressed in the last 30 years, since my Nintendo 64 also had a 64-bit processor".
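The arithmetic behind that comparison, assuming the HD 2900 XT ran its GDDR3 at roughly 1.66 Gbps per pin (my back-calculation from its ~106 GB/s figure):

```python
# Bandwidth = bus width x per-pin data rate / 8 (gigabits -> gigabytes).
def bw_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

old = bw_gbs(512, 1.656)   # HD 2900 XT: ~106 GB/s despite a 512-bit bus
new = bw_gbs(384, 32.0)    # rumoured flagship: 1536 GB/s on a 384-bit bus
print(old, new, new / old)
```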


3 hours ago, HenrySalayne said:

You are not talking about a hard limit, but a soft limit. You can do monolithic designs by simply using several fields.

Part of my point was that even with the smaller reticle size, the majority of consumer-tier GPUs would still fit within it. Only the top-tier part would exceed it and require further effort. Given their typical pricing, I think they can afford that.

 



On 11/15/2023 at 4:41 PM, TVwazhere said:

Okay but Blackwell is a strong architecture name. 

Could make for a good band name.

Make sure to quote or tag people, so they get notified.

