
The last seven Mac quarters have now been the top seven quarters ever in the history of the Mac-- and 50% of Mac purchases were first timers

Obioban
19 minutes ago, LAwLz said:

Do they press down on the touch pad while dragging their finger? 

Yep. It’s horrible. I genuinely don’t understand why three finger drag isn’t enabled by default. 
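For anyone who wants it without digging through System Preferences, here's a minimal sketch that flips the commonly documented defaults keys for three-finger drag from Python. The key names are the widely reported ones; verify them against your macOS version, and a logout may be needed before the change takes effect:

```python
import subprocess

# Widely documented macOS defaults keys for three-finger drag, covering
# both the built-in and Bluetooth trackpad domains. Verify the key names
# against your macOS version; a logout may be needed to apply the change.
TRACKPAD_DOMAINS = [
    "com.apple.AppleMultitouchTrackpad",
    "com.apple.driver.AppleBluetoothMultitouch.trackpad",
]

def enable_three_finger_drag() -> None:
    for domain in TRACKPAD_DOMAINS:
        subprocess.run(
            ["defaults", "write", domain,
             "TrackpadThreeFingerDrag", "-bool", "true"],
            check=True,  # raise if the defaults command fails
        )

if __name__ == "__main__":
    enable_three_finger_drag()
```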


4 hours ago, leadeater said:

Always found that funding model weird. Here, ours are funded based on a rolling count and then weighted by decile (the average socioeconomic rating of the families). That means a school of 800 with a decile of 3 gets more funding than a school of 800 with a decile rating of 7. Schools serving lower-wealth families get more funding. There's also extra funding attached to students with developmental issues, etc.

Yeah, I think the funding model is just something that was created at a time when the people in power cared little about poor people, and it's taking a while to fix the issue. I know more states have started to move away from that funding scheme, but it's still taking a while, and local property taxes are still part of the budget for the most part; it's just that they now have funding from other areas as well to combat the huge disparity between rich areas and poor areas. It would be cool to see more schools able to afford things like iPads, as I know some schools that use them and it seems to work great.
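As a rough sketch of how a decile-weighted scheme like that works, here's a toy calculation; the base rate and multipliers are invented for illustration, not any real ministry's formula:

```python
# Toy model of decile-weighted school funding. The base rate and
# multipliers are invented numbers, not any real funding formula.
BASE_RATE = 1_000  # dollars per enrolled student per year (assumed)
DECILE_MULTIPLIER = {1: 1.6, 2: 1.5, 3: 1.4, 4: 1.3, 5: 1.2,
                     6: 1.1, 7: 1.0, 8: 0.95, 9: 0.9, 10: 0.85}

def school_funding(roll_count: int, decile: int) -> float:
    """Lower decile (lower-wealth community) -> higher per-student weighting."""
    return roll_count * BASE_RATE * DECILE_MULTIPLIER[decile]

# Same roll of 800, different deciles: the decile-3 school gets more.
print(school_funding(800, 3))  # 1120000.0
print(school_funding(800, 7))  # 800000.0
```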


12 minutes ago, Brooksie359 said:

at a time when the people in power cared little about poor people, and it's taking a while to fix the issue.

Try since the beginning of human existence. Whether that be 10,000 years or 300 million, "a while" is an understatement.


1 hour ago, LAwLz said:


Oh, I had no idea. Wish I had known that earlier. 

No idea how people can live without it. How do people drag things without those settings enabled? Do they press down on the touch pad while dragging their finger? 

Yes, just like a mouse 😛

 

I suspect some of this comes down to how you learned to use a trackpad. When I started using laptops, it was a trackball with a button below it, so obviously the button was held. When I started using a trackpad, there was a physical button below it-- so pressing that with your thumb while dragging with your fingers was an exact recreation of the mouse/trackball gesture. The button went away over time, but the gesture still functions exactly the same way.

 

If you didn't go into the trackpad prefs, that's probably why you weren't impressed by the trackpad-- it has a ton of gestures that can really speed up operations, which you're not going to stumble into just by using it.


17 hours ago, DANK_AS_gay said:

It's because Minecraft is a largely single-core application, and Apple Silicon has especially good single-core performance. Unless your PC has 12th-gen Intel, your M1 will walk all over it in terms of single-core performance (pcmark.com).

In addition to that, Minecraft is Java, which is platform agnostic to a degree (WORA? More like "write once, debug everywhere endlessly").


5 hours ago, LAwLz said:

I had the pleasure of using the Macbook Pro with the M1 Pro a while ago, and after having used it I am not surprised that people are flocking to them.

I am not a fan of MacOS so that was a bit detrimental to the experience, but the hardware is fantastic. I am used to premium Windows laptops so it's not like I am comparing a 300 dollar Windows laptop to a 2000+ dollar Mac either.

 

Fantastic performance (benchmarks don't do it justice).

Great speakers.

Fantastic webcam.

Amazing display.

 

I think the touchpad is overhyped though. It's good, but I have had equally good ones on Windows laptops. I also missed the "touch click" or whatever you call it.

It was also way heavier than I imagined. The lack of chamfered edges also made it seem way thicker than it actually was.

 

If it wasn't for my hatred for MacOS, and the lack of support for x86 VMs, then it would be my next laptop for sure. 

You're right on trackpads... there are some Windows laptop makers who know how to do them properly. The issues are more consistency (there are still plenty of junk pads, even in expensive models) and pairing that with an equally great keyboard.

 

You can customize Macs to enable tapping as clicks, if that's what you're talking about; it's just not on by default. Apple provides a fair amount of customizability for trackpad behavior, which is saying something for a company well-known for insisting on specific interface methods.
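If it helps, the setting being described (tap to click) can also be flipped from the command line; a minimal sketch using the widely documented defaults key, which should be verified against your macOS version:

```python
import subprocess

# Widely documented defaults key for tap-to-click on the built-in
# trackpad; equivalent to "Tap to click" in System Preferences > Trackpad.
# Verify the key name on your macOS version before relying on it.
subprocess.run(
    ["defaults", "write", "com.apple.AppleMultitouchTrackpad",
     "Clicking", "-bool", "true"],
    check=True,
)
```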

 


On 5/1/2022 at 3:54 AM, leadeater said:

As @Blade of Grass mentioned, if it were so simple to do the same thing on x86 then it would already be done; problem is, it's not that simple. Not that it's simple on ARM either, but it's definitely not as hard as on x86.

In the quoted post you yourself described a variety of microarchitectural improvements that make the Firestorm cores faster and more energy efficient, independent of the tech node scaling. Changes that are hard to pull off on both x86 and ARM, and that currently only Apple has managed. So I really don't get why you keep repeating the claim that the raw core performance is only due to the tech node shrink. It's just not true. The core design space offers a ton of levers and tradeoffs that can be pulled through clever engineering.

 

Which Apple clearly is capable of doing, partly due to massively acquiring brainpower from AMD several years ago, when the foundation of the M1 was probably laid.

 

Also keep in mind that the M1 can be beaten in multi-core performance, but only at significantly worse energy efficiency (J/op or J/task), which can't be explained by the larger tech nodes of those competitors.


2 hours ago, Dracarris said:

In the quoted post you yourself described a variety of microarchitectural improvements that make the Firestorm cores faster and more energy efficient, independent of the tech node scaling. Changes that are hard to pull off on both x86 and ARM, and that currently only Apple has managed. So I really don't get why you keep repeating the claim that the raw core performance is only due to the tech node shrink. It's just not true. The core design space offers a ton of levers and tradeoffs that can be pulled through clever engineering.

Because that part of it literally only matters for 1T and not for anything else. If I need to do anything that scales at all with a decent number of threads then that part of Firestorm that is better in that way doesn't matter.

 

Plus I never said only, well it is because you can only scale out a design on an also more dense node too. It's not one or the other situation.

 

And also remember these SoCs are huge; supplying similar volume simply is not possible. Apple isn't any more technically capable than anyone else; they are just designing a specific set of SoCs for a specific family of products with a known retail cost and a near guaranteed sales volume. Yeah, Intel or AMD could do similar, but who is going to buy it? What about everything else they need to do? Does AMD give up all their TSMC capacity to build SoCs that nobody is actually going to purchase in volume and lose money on it?

 

If single thread performance was the only important thing to everyone ever always then I'd probably be saying something different, but it isn't.

 

2 hours ago, Dracarris said:

Also keep in mind that the M1 can be beaten in multi-core performance, but only at significantly worse energy efficiency (J/op or J/task) which can't be explained through the larger tech nodes of those competitors.

Zen 3 scaled out in cores is no worse than a node behind. Take for example 511.povray_r: a ~33% difference in performance-per-watt, not at all above the gen-on-gen improvement you get when paired with a node shrink.
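To make the arithmetic concrete, a minimal sketch of the performance-per-watt comparison being made; the scores and package powers below are placeholder numbers chosen to land near the cited ~33% gap, not measured data:

```python
# Placeholder numbers for illustration only, not measured SPEC results.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

m1_eff = perf_per_watt(score=80.0, watts=30.0)      # ~2.67 score/W (assumed)
zen3_eff = perf_per_watt(score=400.0, watts=200.0)  # 2.00 score/W (assumed)

# Relative efficiency advantage, in the ~33% range discussed above.
print(f"{(m1_eff / zen3_eff - 1) * 100:.0f}%")  # -> 33%
```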

 

The problem with talking about power efficiency here is that for neither Intel nor AMD is it some fixed thing; it's actually vastly different across the product stacks, even on the same target platforms, based on things like the number of cores. There's an equally large difference between laptop vs desktop, or even laptop HX vs HS, etc. You can pick and choose whatever you want to compare against and make nearly any point you want, that's actually the advantage of the M1 family again, it's all the same.


1 hour ago, leadeater said:

Plus I never said only, well it is because you can only scale out a design on an also more dense node too. It's not one or the other situation.

I can't make any sense out of that sentence. And the improvements on the M1 are due to both tech node scaling and architectural improvements.

1 hour ago, leadeater said:

Because that part of it literally only matters for 1T and not for anything else. If I need to do anything that scales at all with a decent number of threads then that part of Firestorm that is better in that way doesn't matter.

No. Several improvements of the microarch help push multi-core IPC. Back when the M1 was fresh, AnandTech had a long and detailed article about those.

1 hour ago, leadeater said:

Apple isn't any more technically capable than anyone else; they are just designing a specific set of SoCs for a specific family of products with a known retail cost and a near guaranteed sales volume.

That's a really lame excuse. Apple has hired a truckload of highly capable engineers and designers, sometimes directly headhunted from Intel and AMD. "Technically capable" is the wrong term for the skill of the engineers and designers. And yes, with enough investment in human resources and R&D money, you can actually pull off a better design than your competition.

1 hour ago, leadeater said:

Does AMD give up all their TSMC capacity to build SoCs that nobody is actually going to purchase in volume and lose money on it?

Just because AMD couldn't sell such an SoC doesn't mean that they wouldn't be interested in, and benefit from, many of the gains, tricks, and architectural aspects that Apple achieved.

1 hour ago, leadeater said:

Zen 3 scaled out in cores is no worse than a node behind. Take for example 511.povray_r: a ~33% difference in performance-per-watt, not at all above the gen-on-gen improvement you get when paired with a node shrink.

One other arch, one benchmark, against a very, very broad spectrum of platforms (desktop, mobile, and laptop) and a range of benchmarks where energy efficiency speaks a very clear language in favor of the M1.

1 hour ago, leadeater said:

You can pick and choose whatever you want to compare against and make nearly any point you want, that's actually the advantage of the M1 family again, it's all the same.

Yes, and against anything but the very low-performance end the M1 leaps ahead in energy efficiency. I don't get why this is still so hard for some people to admit.


22 minutes ago, Dracarris said:

I can't make any sense out of that sentence. And the improvements on the M1 are due to both tech node scaling and architectural improvements.

Correct; however, quite often large architectural improvements need to be done on a node shrink, otherwise the core design gets too large, and therefore so does the die. It's why Intel liked tick-tock so much.

 

The point was that I've not said the gains are only from the TSMC node-to-node gains they publish.

 

22 minutes ago, Dracarris said:

No. Several improvements of the microarch help push multi-core IPC. Back when the M1 was fresh, AnandTech had a long and detailed article about those.

For what I talked about, and what you quoted, it's only an important factor for single thread, not multi thread.

 

22 minutes ago, Dracarris said:

That's a really lame excuse

It's not an excuse lol. Don't hate on facts.

 

22 minutes ago, Dracarris said:

One other arch, one benchmark, against a very, very broad spectrum of platforms (desktop, mobile, and laptop) and a range of benchmarks where energy efficiency speaks a very clear language in favor of the M1.

I could go through all the SPEC2017 sub-scores if you really want; spoiler alert, it's going to get very repetitive.

 

22 minutes ago, Dracarris said:

Yes, and against anything but the very low-performance end the M1 leaps ahead in energy efficiency.

Ryzen Mobile 6000 does just fine actually. So does every Zen 3 product not designed to push power. Like I said, you can really pick and choose to suit whatever narrative you want to push; that doesn't make every argument doing so all that valid, in either direction.

 

22 minutes ago, Dracarris said:

I don't get why this is still so hard for some people to admit.

Because it's not, no matter how many people and how many times ya want to say it 🙃

 

All the huge performance gains in real world applications come from other aspects, not that the Firestorm cores are leaps ahead.

 

Just being better, or the best, isn't enough for some people I guess. Sorry if reality checks get annoying; probably a sign you care too much about something you shouldn't.


2 minutes ago, leadeater said:

Ryzen Mobile 6000 does just fine actually.

So because the Ryzen Mobile 6000 does "just fine", that proves your point?

3 minutes ago, leadeater said:

Because it's not, no matter how many people and how many times ya want to say it 🙃

Yes it is, no matter how often you try to wash it off on the 5nm node. Unless you compare the full system power of an M1 machine against only the CPU/APU/package/whatever power, the M1 currently simply has the best energy efficiency.

4 minutes ago, leadeater said:

All the huge performance gains in real world applications come from other aspects, not that the Firestorm cores are leaps ahead.

I didn't claim huge performance gains, I was talking about energy efficiency.


1 minute ago, Dracarris said:

So bcs the Ryzen M 600 does "just fine" proves your point?

Remind me and yourself what I originally said?

 

1 minute ago, Dracarris said:

Yes it is, no matter how often you try to wash it off on the 5nm node. Unless you compare the full system power of an M1 machine against only the CPU/APU/package/whatever power, the M1 currently simply has the best energy efficiency.

When did I say it didn't have this?

 

2 minutes ago, Dracarris said:

I didn't claim huge performance gains, I was talking about energy efficiency.

Yep, and that comes from the performance gains. The large performance efficiencies that you can show in applications, for real tasks, come from the overall SoC and what Apple did with it, not from the Firestorm cores' arch being so much better. Not sure why this is difficult.

 

 


@Dracarris

 

Statement: "M1 has the best performance efficiency on the market"

Me: Yes I agree

 

Statement: "M1 is miles ahead of everyone else in every way, AMD and Intel are a joke"

Me: Well hold up there

 

I'll only interject when people make stupid or silly claims, or overblow the actual advantages that are there. Is it really your problem that I tend to like to ground people in a bit more reality?


8 hours ago, leadeater said:

Statement: "M1 is miles ahead of everyone else in every way, AMD and Intel are a joke"

we seem to have a different understanding of what I said.

 

I disagree with you in reference to energy efficiency and the reasons for it on the M1 but I frankly currently don't have the time and energy to engage in long quotation wars; so I'm afraid we'll have to agree to disagree.


2 hours ago, Dracarris said:

we seem to have a different understanding of what I said.

Nope, you didn't say anything until you decided to join in. Hence that post was not a reference to anything you said, and why I asked if it was even your problem. It's the reason these discussions start, and why they are necessary. Some people just aren't happy with the thing they like merely being the best on the market; they have to beat down the rest, or trumpet it up, no matter how right, wrong, or ill-informed what they are saying is. And you're acting similarly, as if Intel isn't, well..... literally bloody Intel. No, Apple is not better at silicon engineering than they are, or silicon design, nor do they have better engineers or talent. Both companies are absolutely full of the best people in the industry, doing the best work in the industry, that best suits their products and clients. If anyone actually is slightly behind in this aspect, it's AMD.

 

Want to say otherwise? Be my guest, but do so with more than mere opinion and come to the table with something to back it. And I don't mean back it with product examples either; if you are going to talk engineering capabilities and human resources then you'll need to show it with that. A few key people moving around here or there isn't changing this either; not even Jim Keller, when labelled with such a thing, passed up the opportunity to say the entire team made it happen, not just him; he only gets the credit for the work due to his role and title.

 

 

2 hours ago, Dracarris said:

I disagree with you in reference to energy efficiency and the reasons for it on the M1 but I frankly currently don't have the time and energy to engage in long quotation wars; so I'm afraid we'll have to agree to disagree.

Apple is not doing anything either Intel or AMD could not do, capability and technical design wise. As I said, they simply are not going to do something fiscally and economically non-viable that would compromise their business and products. Especially when a large factor in such a product's success is reliant on a 3rd-party external software company, Microsoft, and on top of that on software actually utilizing the advantages of such a product, which will not happen; we know this with strong confidence. Microsoft and Intel can't even fully iron out hybrid CPUs, and somehow throwing in unified memory, logically and physically, across CPU and GPU, plus some beefy media encoders/decoders while at it, isn't going to make that worse? Then somehow a company like Adobe is going to fully and properly utilize such an SoC from the day of its release (or in a year, or 2 years..).

 

Yes, it's such a brilliant idea to basically cut your product output by a factor of 10 to make something no hardware vendor wants, other than maybe for a tiny fraction of products, while totally ruining the entire industry outright in the process. You think supply issues are bad now...

 

Also, if you go past the first sentence of that other paragraph you quoted in the other post, and reference the other times I've said it, I specifically said you can basically pick and choose any AMD, or Intel, CPU to suit whatever narrative anyone wants, no matter how absurd or valid. Like, when is the legitimate, actual time you, or really anyone, has cared about boost power limits or sustained power limits on high-end desktops? Never? So then why would anyone reference those to show how far ahead the M1 family is in performance efficiency, when those products are only the way they are by choice and design?

 

Hence why I actually used an AMD EPYC 7713 when I said the SPEC2017 performance efficiency difference was only 33%: because that specific SKU is performance-efficiency optimized, and also to see if you or anyone was actually doing any data checking at all, because if you had, and say run it against a Ryzen Mobile 6900HS, it would not have been 33%. I really have little to no interest in opinions when making specific claims; those need to come with data, evidence and sources so they can be verified for accuracy and applicability, otherwise we're comparing server CPUs to laptop SoCs, or high-power desktop CPUs to laptop SoCs, and in architectural contexts rather than product contexts for that matter too.

 

Thus, for CPU architectures, trying to prove how far ahead one is of another, whether it boosts or doesn't boost at all, how should this best be done? Do we pick best case, do we pick worst case, do we pick CPUs/SoCs only from within comparable products? Or do we do as always: pick whichever one best suits the point one wants to make, irrespective of relevance?

 

How much is one more efficient than the other when, at the product level, it is 51% purely by choice of operating parameters?
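One way to see how much "choice of operating parameters" dominates these comparisons: performance typically scales sublinearly with power, so the same silicon run at a higher power limit looks far less efficient. A toy model with an assumed square-root scaling law (the exponent is an illustrative assumption, not silicon data):

```python
# Toy model: assume performance scales with the square root of package
# power. The scaling exponent is an illustrative assumption, not data.
def perf(power_w: float, k: float = 10.0) -> float:
    return k * power_w ** 0.5

for power in (15, 45, 125):
    p = perf(power)
    print(f"{power:>4} W -> perf {p:6.1f}, efficiency {p / power:.2f} perf/W")
# 15 W looks ~2.9x more "efficient" than 125 W on identical silicon,
# purely from the configured operating point.
```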

 

If you pick the most optimal power and frequency envelope for TSMC 7nm, factoring in the most efficient point for the Zen 3 architecture to get the best performance efficiency possible, i.e. the 7713 (or thereabouts), would this not be the fairest comparison, on the assumption that this is what Apple did, or close to it, for the Firestorm cores in their SoCs, if the discussion is about architectures and not the actual products?

 

People, such as yourself, seem to have a twisted idea about just how far ahead Apple really is at the CPU architecture level compared to reality. There's nothing here beyond a standard generational shift along with a node shrink. Intel have done these types of gains multiple times during node shrinks, AMD too (if we ignore Bulldozer, heh). Also, if we are to believe current rumors, Zen 4 is bringing around 40% more performance over Zen 3; do you want to guess who would be the most performance efficient in multi-thread workloads for pure CPU if that is true?

 

You don't need to go through any long quotes at all; if you want to specifically address the multi-thread performance efficiency then you're quite welcome to present your case with data and sources. And if your case is on the premise that they are further ahead than standard node gains, product to product, then come with evidence of that as well. Being simply better in this aspect is not in dispute; it's by how much, and also the claim that it's above and beyond "normal" and "explained only by the node".

 

There is one area where I will say that, at the CPU architecture level, Apple/Firestorm is significantly ahead: single-core performance, for which I have already explained why. However, as then and now, with the caveat that it only matters when it matters, otherwise it does not. If the goal is to load up as many execution resources across as many cores/threads as possible, then it matters little how those are logically grouped into "cores", other than thread scaling at the high count end.


On 4/29/2022 at 11:54 PM, leadeater said:

Have you looked into using something like Jamf Pro? That'll give you parity with SCCM for your Mac fleet. And yes, you can run Jamf Pro on a Windows server, or Linux.

We use Jamf Pro for around 10k iPads and a couple hundred iMacs/MacBooks. We were going to go all-in on Chromebooks, but after we ordered a few hundred someone got a big head and said they preferred iPads. For onboarding Windows computers we use a PXE server. We have some issues with networked drives and compatibility issues with most of the deployed MacBooks; they're also not price competitive with standard Dell laptops for normal use cases. If we wanted something fancier we could get a Dell XPS, still save money over the MacBooks, and still have no issues with docks/hardware.
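For anyone curious what managing a fleet that size programmatically can look like, a hedged sketch against the Jamf Pro API; the server URL and credentials are placeholders, and the endpoints shown (the /api/v1/auth/token exchange and the Classic API computers listing) should be verified against your Jamf Pro version's documentation:

```python
import requests

JAMF_URL = "https://jamf.example.com"   # placeholder server
USER, PASSWORD = "api_user", "secret"   # placeholder credentials

# Exchange basic-auth credentials for a short-lived bearer token.
token = requests.post(
    f"{JAMF_URL}/api/v1/auth/token", auth=(USER, PASSWORD)
).json()["token"]

# List enrolled computers via the Classic API, asking for JSON.
resp = requests.get(
    f"{JAMF_URL}/JSSResource/computers",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/json"},
)
resp.raise_for_status()
for computer in resp.json()["computers"]:
    print(computer["id"], computer["name"])
```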



On 4/29/2022 at 10:11 PM, RedRound2 said:

This has nothing to do with most of my comment but whatever.

 

You do realize that the MacBook Air is the lowest-end M1 device, right? What Dell laptop are you comparing against? Context matters a lot here. And it's only basic and fast compared to two-year-old standard hardware? What? What are these "other" devices you are comparing it against?

They are not price competitive at the lower levels because they cost more to accomplish things 99% of people in an office setting don't need. If you sit in an office setting and are deployed a device, and aren't part of a creative team, you're gonna find yourself with a Windows laptop almost every time. Also, if they require something "fancy", an entry-level XPS costs less than an M1.



3 minutes ago, Lord Bloobus said:

They are not price competitive at the lower levels because they cost more to accomplish things 99% of people in an office setting don't need. If you sit in an office setting and are deployed a device, and aren't part of a creative team, you're gonna find yourself with a Windows laptop almost every time. Also, if they require something "fancy", an entry-level XPS costs less than an M1.

Reliability, support, battery life, good experience, and build quality are what you get out of paying a little premium. Furthering your comparison, a cheap $100 Chinese Android tablet can also probably do most of what is required in that particular office. Or even Chromebooks. But there is a reason why they all opt for laptops. So this is just a stupid little comparison.

And can you get an XPS laptop with the same config and performance as the entry-level MacBook? I'm pretty sure you can't.


52 minutes ago, Lord Bloobus said:

We were going to go all-in on Chromebooks

I'm curious. If you're currently using Jamf Pro for such a lot of devices, what was the strategy for managing the Chromebooks?  I don't recall hearing about a way of managing Chromebooks from an Enterprise perspective (including remote lock/wipe & so on)... To be clear, I'm not saying that you can't do that with them, I've just never come across it before so I'm interested.

 

45 minutes ago, Lord Bloobus said:

They are not price competitive at the lower levels because

because that's not a market Apple have chosen to compete in.

 

40 minutes ago, RedRound2 said:

And can you get an XPS laptop with the same config and performance as the entry-level MacBook? I'm pretty sure you can't.

There was an XPS 13 Plus review video on LTT a few days ago which looked like a genuinely great product. I don't know the pricing of it though. It wouldn't interest me at all unless I was intending to switch back to 100% Linux again, but at this time that's very unlikely to happen.


1 hour ago, Lord Bloobus said:

We use Jamf Pro for around 10k iPads and a couple hundred iMacs/MacBooks. We were going to go all-in on Chromebooks, but after we ordered a few hundred someone got a big head and said they preferred iPads. For onboarding Windows computers we use a PXE server. We have some issues with networked drives and compatibility issues with most of the deployed MacBooks; they're also not price competitive with standard Dell laptops for normal use cases. If we wanted something fancier we could get a Dell XPS, still save money over the MacBooks, and still have no issues with docks/hardware.

Docks are a bit of a crapshoot, no matter who makes it. Dell's are proprietary, even the USB-C ones. Unless you're going to stick to sub-90w laptops, you're not going to find any current dock that works with everything.

 

Macbook (Air) and iPads are pretty much guaranteed to work with powered docks because they are 30w-45w devices.

 

Dell units: we were still deploying things with the legacy dock in 2018. This was because the Dell 77xx and 75xx were the only laptops that used this dock, and the 240w USB-C WD19DC's didn't exist yet. As soon as those docks came out, someone decided that they would give everyone these unpowered USB-C docks that only use DisplayLink... for their CAD users. I was like... no. So I made a point of ensuring that the 17" laptop users got the 240w Dell dock, because AutoCAD treats DisplayLink monitors as software GPUs, and if you start AutoCAD on a DisplayLink monitor, AutoCAD won't use the Quadro GPU at all.

 

If you see "DisplayLink" on the dock, you're in for a rude awakening if you use the DisplayLink attached monitors for anything moving. It's probably fine for looking at static data, but the latency from it being a software GPU is terrible. Don't watch video or play games on them, it will be worse than the iGPU.

 


38 minutes ago, Paul Thexton said:

I'm curious. If you're currently using Jamf Pro for such a lot of devices, what was the strategy for managing the Chromebooks?  I don't recall hearing about a way of managing Chromebooks from an Enterprise perspective (including remote lock/wipe & so on)... To be clear, I'm not saying that you can't do that with them, I've just never come across it before so I'm interested.

Chromebooks come with enterprise enrollment options through Google Admin, which we implemented; we then do content filters through Umbrella and lock down some settings as needed through deployment profiles. Jamf is a bit more elegant, but Google Admin is functional as well.

Also ChromeOS has allowed remote locking for a while through Google Admin.
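A hedged sketch of that remote-lock flow via the Admin SDK Directory API; the credentials file, admin address, and device ID are placeholders, and the action names should be checked against Google's chromeosdevices documentation for your domain:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder service account file and admin subject; requires the
# admin.directory.device.chromeos scope with domain-wide delegation.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/admin.directory.device.chromeos"],
    subject="admin@example.com",
)
directory = build("admin", "directory_v1", credentials=creds)

# Disable (remotely lock) a managed Chromebook by its device ID.
directory.chromeosdevices().action(
    customerId="my_customer",
    resourceId="DEVICE_ID",  # placeholder device ID
    body={"action": "disable"},
).execute()
```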

 

46 minutes ago, Paul Thexton said:

There was an XPS 13 Plus review video on LTT a few days ago which looked like a genuinely great product. I don't know the pricing of it though. It wouldn't interest me at all unless I was intending to switch back to 100% Linux again, but at this time that's very unlikely to happen.

I keep pushing budget consciousness but it seems like they're not getting it.



10 minutes ago, Kisai said:

Docks are a bit of a crapshoot, no matter who makes it. Dell's are proprietary, even the USB-C ones. Unless you're going to stick to sub-90w laptops, you're not going to find any current dock that works with everything.

 

Macbook (Air) and iPads are pretty much guaranteed to work with powered docks because they are 30w-45w devices.

 

Dell units: we were still deploying things with the legacy dock in 2018. This was because the Dell 77xx and 75xx were the only laptops that used this dock, and the 240w USB-C WD19DC's didn't exist yet. As soon as those docks came out, someone decided that they would give everyone these unpowered USB-C docks that only use DisplayLink... for their CAD users. I was like... no. So I made a point of ensuring that the 17" laptop users got the 240w Dell dock, because AutoCAD treats DisplayLink monitors as software GPUs, and if you start AutoCAD on a DisplayLink monitor, AutoCAD won't use the Quadro GPU at all.

 

If you see "DisplayLink" on the dock, you're in for a rude awakening if you use the DisplayLink attached monitors for anything moving. It's probably fine for looking at static data, but the latency from it being a software GPU is terrible. Don't watch video or play games on them, it will be worse than the iGPU.

 

We're basically an all-Dell environment, so we use Dell docks for every laptop, but the M1s have had issues with displaying one day and then not the next, or dropping sound. Tried just about everything.



2 minutes ago, Lord Bloobus said:

Chromebooks come with enterprise enrollment options through Google Admin, which we implemented

Cool. I genuinely didn't know that even existed. 

 

2 minutes ago, Lord Bloobus said:

I keep pushing budget consciousness but it seems like they're not getting it.

Conversely, the cheapest possible solution isn't always the right one to go with in the name of lowering budgets. I'll take you at face value in this instance and say yeah, I can well see a situation where most of your end users (I've no idea what your company is or what it does) can perform their roles perfectly well with low-cost Windows / Chromebook devices.

 

But on the other hand, I've worked with IT people before who are so pig-headedly against anything that isn't Windows that I find it absolutely baffling. I don't understand why so many people decline the opportunity to upskill themselves (especially when an employer offers to pay for training courses). I once worked at a small startup company where "the IT guy" (as in one person, that was the scale of the company when I joined it) absolutely refused to learn how to do anything in relation to Linux, and then got his nose bent out of shape when they brought somebody in, as his boss, to handle the server infrastructure; somebody who knew what he was doing on anything that didn't have a Microsoft logo on it.


8 minutes ago, Lord Bloobus said:

We're basically an all-Dell environment, so we use Dell docks for every laptop, but the M1s have had issues with displaying one day and then not the next, or dropping sound. Tried just about everything.

Don't know what to say to that, other than if it were me I'd be raising support cases with both Dell and Apple about it. The likely outcome is they both point the finger at each other, of course. But I've been using a cheap(ish) Plugable TB3/USB-C dock since 2020 with zero issues, including with an M1 Max MacBook for the past few weeks. Meanwhile at work we also have Dell docks (which IT told us were twice the price of what I paid) that people have had all sorts of issues with on actual Dell laptops, never mind MacBooks.

 

I'll admit that my experience is purely anecdotal, limited sample size etc, but I'm afraid I don't have a high regard for Dell's USB docks.


3 minutes ago, Paul Thexton said:

Don't know what to say to that, other than if it were me I'd be raising support cases with both Dell and Apple about it. The likely outcome is they both point the finger at each other, of course. But I've been using a cheap(ish) Plugable TB3/USB-C dock since 2020 with zero issues, including with an M1 Max MacBook for the past few weeks. Meanwhile at work we also have Dell docks (which IT told us were twice the price of what I paid) that people have had all sorts of issues with on actual Dell laptops, never mind MacBooks.

 

I'll admit that my experience is purely anecdotal, limited sample size etc, but I'm afraid I don't have a high regard for Dell's USB docks.

Dell docks work fine as long as the device plugged in has the drivers installed... in the right order.

 

What happens (on enterprise systems) is that the Realtek audio drivers are broken between the Windows Store and Windows Update. If you install the drivers in the wrong order, the audio driver doesn't install the auto-switch driver for when the dock is plugged/unplugged.

 

 

