Microsoft Unveils Their Own (Data Center) CPU and NPU Chips - Cobalt 100 and Maia 100

LAwLz

I'm not gonna lie, I'm getting kinda worried with how big Microsoft is getting. It's starting to feel like they are in every facet of life and you can't get away from them. 

 

Also they are moving towards owning everything, like pretty soon you won't even own your computer anymore.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

5 hours ago, bcredeur97 said:

I'm not gonna lie, I'm getting kinda worried with how big Microsoft is getting. It's starting to feel like they are in every facet of life and you can't get away from them. 

 

Also they are moving towards owning everything, like pretty soon you won't even own your computer anymore.

I agree. Too big and in too many places. The silver lining is that when MS, Amazon, Alphabet and the others try to keep growing, they have to compete with each other. More competition is always better. Big tech has been growing in its own segments without much real competition that couldn't simply be bought out.

Still, I'd rather see smaller companies. Microsoft has three to four times the revenue of the country I happen to live in.

11 hours ago, Arika said:

I don't think I will ever understand the tech-boner people have for RISC-V.

There is a lot of hype around it, but fundamentally it does have things going for it. It just needs to be kept in perspective.

 

11 hours ago, Forbidden Wafer said:

Because it is license free.

The ISA is free and royalty-free, which is nice. But you still have to design a core around the ISA and get it made. Those are no small tasks, which is partly why I think we've mainly seen embedded-controller-type applications for it so far, as those are relatively simple compared to a desktop/mobile-class high-performance CPU core. If someone were to make a high-performance CPU out of it to compete against desktops/laptops/mobiles, they'd still charge for their efforts. The license cost saving is only one part of that overall cost.

14 minutes ago, porina said:

The license cost saving is only one part of that overall cost.

Also Arm licensing is rather reasonable for the most part in the grand scheme of things. 

 

Like yes, it isn't zero, but you get what you pay for; Arm offers technical support on a level most wouldn't even imagine.

11 hours ago, Jinchu said:

I agree. Too big and in too many places. The silver lining is that when MS, Amazon, Alphabet and the others try to keep growing, they have to compete with each other. More competition is always better. Big tech has been growing in its own segments without much real competition that couldn't simply be bought out.

Still, I'd rather see smaller companies. Microsoft has three to four times the revenue of the country I happen to live in.

In a perfect world, every city would have some sort of local cloud vendor, or if you're in a small town, maybe there's a cloud vendor in a nearby city. They run a "mini" datacenter to service their local area.

And you could use them for everything. They would work with another small vendor to replicate your data to another city for redundancy/backups. And the plus is since they're local -- you'll always have fast access to your stuff. The network packets don't have to go far.

The problem I find is that someone has to write the software to do all of this (and do it well), and well... Microsoft does that... so they end up controlling everything still lol

Open source is so far behind in this area. You can't just piece things together and become a competitive cloud provider out of your house to get started. You've got to pay big bucks for the special-sauce software up front, which you can't afford.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

Link to comment
Share on other sites

Link to post
Share on other sites

I have a question: how will this affect me (an Xbox user), if at all? When I hear about Microsoft data center stuff, I think it's reasonable that it would affect Xbox stuff, yeah?

1 hour ago, BrandonLatzig said:

I have a question: how will this affect me (an Xbox user), if at all? When I hear about Microsoft data center stuff, I think it's reasonable that it would affect Xbox stuff, yeah?

This stuff? No, I doubt any of this will be anywhere near Xbox for the foreseeable future.

2 hours ago, Arika said:

This stuff? No, I doubt any of this will be anywhere near Xbox for the foreseeable future.

In a roundabout way, it might. I could see cloud-delivered content such as AI-generated NPCs (not just cookie-cutter characters, but a billion-plus NPCs that each genuinely look and act like unique individuals) in addition to AI-driven procedurally generated world maps. All created in the cloud and delivered as DLC to the console.

4 minutes ago, StDragon said:

In a roundabout way, it might. I could see cloud-delivered content such as AI-generated NPCs (not just cookie-cutter characters, but a billion-plus NPCs that each genuinely look and act like unique individuals) in addition to AI-driven procedurally generated world maps. All created in the cloud and delivered as DLC to the console.

Yes, but we are such a long way away from that. And I personally do not wish to see this happen, because it just pushes games further into games-as-a-service territory, where nothing is done on your own machine, and when the developers decide to no longer pay for the processing power, your game becomes useless and unplayable.

4 hours ago, bcredeur97 said:

And you could use them for everything. They would work with another small vendor to replicate your data to another city for redundancy/backups. And the plus is since they're local -- you'll always have fast access to your stuff. The network packets don't have to go far.

You'd fundamentally have to change the entire internet infrastructure to do that, because sadly that's not how it actually works. Geographically close datacenters are rarely, if ever, actually close network-path-wise unless that is also where your connection terminates.

 

For example, most connections here in NZ's North Island terminate in Auckland before IP routing etc. happens, so if you are 800 km away and want to talk to your local town/city datacenter, or even your neighbor, you are round-tripping 1,600 km for a sub-50 km geographic distance.

8 hours ago, leadeater said:

For example, most connections here in NZ's North Island terminate in Auckland before IP routing etc. happens, so if you are 800 km away and want to talk to your local town/city datacenter, or even your neighbor, you are round-tripping 1,600 km for a sub-50 km geographic distance.

Maybe I’m dumb and misunderstanding how packets work, but wouldn’t the difference of 50km to 1600km only be 1ms vs 8ms? (Rounding up)

1 hour ago, Arika said:

Maybe I’m dumb and misunderstanding how packets work, but wouldn’t the difference of 50km to 1600km only be 1ms vs 8ms? (Rounding up)

Ah yes 🙃

 

A 1,600 km (roughly 1,000 mile) round trip, without any underlying switching or routing overhead, is about 21 ms (that's based on the speed of light through actual fibre cables).

 

Stuff designed for the internet is pretty well fine with 21 ms, though, which is also why the idea of all these super-close datacenters isn't necessary unless you want greater than 10 Gbps throughput, or want to do something you probably shouldn't be doing over the internet anyway, like high-frame-rate, high-fidelity remote PC streaming/VDI.
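
For reference, the propagation arithmetic behind figures like these is simple enough to sketch. The snippet below is only a back-of-the-envelope illustration: the roughly-2/3-of-c velocity factor for glass fibre and the example path lengths are assumptions, and real cable routes are rarely straight lines, which (along with switching overhead) is why measured round trips come out higher than these raw numbers.

```python
# Back-of-the-envelope propagation delay through optical fibre.
# Assumptions: light in glass travels at roughly 2/3 of c, and the fibre
# path is exactly the quoted length (real routes meander and add overhead).
SPEED_OF_LIGHT_KM_S = 299_792   # km per second, in a vacuum
FIBRE_VELOCITY_FACTOR = 2 / 3   # approximate velocity factor in glass fibre

def fibre_delay_ms(path_km: float) -> float:
    """One-way propagation delay in milliseconds over path_km of fibre."""
    return path_km / (SPEED_OF_LIGHT_KM_S * FIBRE_VELOCITY_FACTOR) * 1000

# 50 km (the "local" ideal), 1600 km (the detour via Auckland, one direction),
# and 3200 km (the same detour out and back for a request/response pair).
for path_km in (50, 1600, 3200):
    print(f"{path_km:>5} km of fibre ~ {fibre_delay_ms(path_km):5.1f} ms")
```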

11 hours ago, leadeater said:

You'd fundamentally have to change the entire internet infrastructure to do that, because sadly that's not how it actually works. Geographically close datacenters are rarely, if ever, actually close network-path-wise unless that is also where your connection terminates.

For example, most connections here in NZ's North Island terminate in Auckland before IP routing etc. happens, so if you are 800 km away and want to talk to your local town/city datacenter, or even your neighbor, you are round-tripping 1,600 km for a sub-50 km geographic distance.

I think you misunderstood what bcredeur said.

The way I interpreted their post was that the networking would be done in the same city too.

 

This is actually how the company I work for does things. We have a data center in a city, and we are working with a local company that owns a bunch of fiber in the city. We can just call them and go "hey, we want a direct dark fiber connection between these two addresses".

 

Also, correct me if I am wrong (because I don't work a lot with Azure), but isn't it possible to choose how your traffic gets routed with that? By default, it goes through the Azure backbone, but I am fairly sure that you can select a way where Microsoft dumps the traffic out from their datacenter as quickly as possible, which should be at the data center or at the very least very near it.

 

But chances are it wouldn't make much of a difference for most customers. All the network processing adds more latency than the geographical distance anyway, at least in most cases.

34 minutes ago, LAwLz said:

The way I interpreted their post was that the networking would be done in the same city too.

Which would mean fundamentally changing how the internet is done then... right?

 

As it is today, you can have an internet exchange in your city (I do), but that's not necessarily where your home connection actually goes; mine goes to Auckland BEFORE anywhere else. So my ping to my local internet exchange is 13 ms and my ping to Auckland is 9 ms, and I'm not even slightly close to Auckland.

 

In fact my internet connection goes past at least 3 internet exchanges on the way to Auckland.

 

And just to be clear, my connection is encapsulated and MPLS label-switched to Auckland.

 

Anyway, the main point was that the proposed situation is an idealistic view that doesn't align all too well with how things are done today, and such a change isn't actually a minor one.
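
If you want to see this effect on your own connection, a rough way is to trace the route to something hosted at your nominally local exchange and compare it with the path to the distant hub. The sketch below is purely illustrative: the hostnames are hypothetical placeholders, and it simply shells out to the system traceroute/tracert.

```python
# Illustrative only: compare the actual network path to a "local" endpoint
# versus a distant hub. The hostnames below are hypothetical placeholders.
import platform
import subprocess

def trace(host: str) -> None:
    """Run the system traceroute (tracert on Windows) against a host and print the result."""
    cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"--- path to {host} ---")
    print(result.stdout)

for endpoint in ("server.local-ix.example.nz", "server.akl-ix.example.nz"):
    trace(endpoint)
```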

 

34 minutes ago, LAwLz said:

We can just call them and go "hey, we want a direct dark fiber connection between these two addresses".

But that's not how residential internet connections are done though.

 

34 minutes ago, LAwLz said:

Also, correct me if I am wrong (because I don't work a lot with Azure), but isn't it possible to choose how your traffic gets routed with that?

Sort of. Microsoft owns and operates its own network and has a presence in most internet exchanges, so often the best connectivity choice into Azure is to not pay for ExpressRoute, as that service endpoint could be further away than the closest entry node into the Microsoft network.

 

It's probably not much of a consideration in the EU and US, but here in NZ Microsoft does not recommend ExpressRoute. We just use multiple S2S VPN connections from each datacenter into Azure, and most of our traffic (e.g. O365) doesn't go via those anyway. Once the Azure region being deployed in NZ is finished, ExpressRoute can actually be used effectively.

15 hours ago, Arika said:

Yes, but we are such a long way away from that. And I personally do not wish to see this happen, because it just pushes games further into games-as-a-service territory, where nothing is done on your own machine, and when the developers decide to no longer pay for the processing power, your game becomes useless and unplayable.

The capability exists today. Depending on the desired complexity, you can farm out content creation that takes anywhere from minutes to weeks or longer. For example, in a space MMO you could have new planet world maps generated on the back end and stream the content for local caching on the PC/console.

 

For indie games, an individual could run an LLM locally to generate content as well. Maybe not nearly as complex and feature-rich (such as calculating every branch on a tree or the placement of rocks), but "good enough" as a balance between local compute capability and expected results. While newer CPUs have NPUs, leveraging the GPU to generate game content while you're not playing would make more sense. I can imagine future games like Valheim where users host their own worlds; ditto for D&D types where dungeons could be generated for campaigns hosted by a GM.

AI is already a fantastic tool for the heavy lifting, but the results often need tuning with a human touch. And depending on how much Microsoft charges to lease out this NPU capability, it might be cheaper than creating content locally, depending on the complexity you want and the hardware needed to produce it.
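
As a toy illustration of the "generate in the cloud, cache on the console" idea (not any real game engine or Azure service), the sketch below builds a world map as a pure function of a seed. Because the output is deterministic, the expensive generation can run on the back end and either the finished asset or just the seed can be shipped to the client.

```python
# Toy procedural world generator: the map is a pure function of a seed, so a
# back-end service and a local client will always agree on the same content.
import hashlib
import random

def generate_world(seed: str, width: int = 48, height: int = 12) -> list[str]:
    """Build a tiny tile map ('~' water, '.' plains, '^' mountains) from a seed."""
    rng = random.Random(int.from_bytes(hashlib.sha256(seed.encode()).digest(), "big"))
    tiles = "~~~...^"  # crude weighting: more water and plains than mountains
    return ["".join(rng.choice(tiles) for _ in range(width)) for _ in range(height)]

if __name__ == "__main__":
    # The same seed always yields the same map, so it can be generated once
    # server-side and re-created or cached client-side without re-sending it.
    for row in generate_world("planet-042"):
        print(row)
```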

6 hours ago, leadeater said:

But that's not how residential internet connections are done though.

I don't think the person you replied to was talking about residential Internet connections. Or who knows, maybe they were.

I interpreted their post as more of a business-to-business kind of proposal. 

 

Of course, if they were talking about some residential person accessing Azure resources then what you are saying is correct. The traffic will jump about a lot. But if we are talking about business-to-business type of situations then it is possible to ensure that traffic gets routed in an optimal way.

But as I said before, those low latencies are not really beneficial except in very specific scenarios. So while it might seem like a benefit, it probably won't be in any meaningful way.

7 hours ago, LAwLz said:

I don't think the person you replied to was talking about residential Internet connections. Or who knows, maybe they were.

I interpreted their post as more of a business-to-business kind of proposal. 

Mmm, yeah, true. All the lower-end business connections are done the same way I mention here, and in a lot of other places too. My home internet connection is a business plan, but it uses the same GPON infrastructure used for residential connections; the difference is pretty much just committed capacity, contention ratios, allowed additional subnets and 24/7 support.

 

Fairly consistently, most internet connections are switched through to a more central point for processing, which is also where traffic is most likely to go anyway, i.e. where the Google cache servers, Netflix boxes and so on are.

 

I responded the way I did because, if we are talking about lots of small datacenters in towns and small cities, then they're probably going to be servicing mostly smaller businesses, and the edge users are likely to also be residential customers. I'd like internet traffic to be processed closer to the "last mile", but for the most part it's kinda wasteful and unnecessary to do that, and we both agree ~10 ms vs ~20 ms doesn't really matter. It would also increase the cost of internet service.

 

7 hours ago, LAwLz said:

Of course, if they were talking about some residential person accessing Azure resources then what you are saying is correct.

It's true regardless: whoever, wherever and however you are, accessing Azure has to go through a Microsoft entry node, and you're usually better off doing this over the standard internet; that is actually Microsoft's own advice.

 

Azure ExpressRoute has to go through a connectivity provider, which may be, but on balance likely isn't, as close as the closest Microsoft entry node. ExpressRoute is actually more about bandwidth cost than anything else; here is a decent enough article on it: https://www.megaport.com/blog/use-expressroute-local-for-azure-private-peering/

 

ExpressRoute Direct is the highest-end offering, dual 100 Gbps; few actually need that, and at that point you're probably in the same facility or extremely close anyway (with direct fibre between them at the provider level).

 

ExpressRoute isn't a directly delivered Microsoft service; it's done through partners, which is why I'm pointing out the best-possible-path conundrum: it's not always the best path, and only the highest service tiers even allow things like Office 365 connections to go through it.
