$50 vs $50,000 Computer

jakkuh_t

why unlisted?

Microsoft owns my soul.

Also, Dell is evil, but HP kinda nice.

3 minutes ago, Hinjima said:

It's not 🙂

It was when I checked on it. Ah, probably because they set a specific upload time.

2 minutes ago, Gat Pelsinger said:

It was when I checked on it. Ah, probably because they set a specific upload time.

You must be new to being first... they usually make the thread *just* before the scheduled launch on YouTube.

(They do scheduled "uploads" on YouTube so that they can launch the video exactly on time, even if tech issues arise.)

Loved this video, it reminds me of gaming at all price points

I'm glad LLM performance is getting showcased!

LLMs unfortunately are terrible at counting. And tokens are not syllables, so it's even harder. It's not surprising it gets this task wrong; it's blind luck if you get a correct syllable count. I guess you could find some madlad who fine-tuned a model specifically for haikus, and it could have better performance, at the cost of being worse at everything else.
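The "tokens are not syllables" point is easy to demonstrate: even a plain-text heuristic struggles with syllables, never mind a model that sees subword tokens. A minimal sketch (a deliberately naive vowel-group heuristic, not how any real tokenizer or model works):

```python
import re

def naive_syllables(word: str) -> int:
    """Rough syllable estimate: count contiguous vowel groups.

    Deliberately naive: no notion of silent 'e' or of vowels split
    across syllables, which is exactly where people disagree.
    """
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

line = "your radiant smile"
counts = {w: naive_syllables(w) for w in line.split()}
print(counts)  # {'your': 1, 'radiant': 2, 'smile': 2}
```

Note it fails in both directions: "radiant" (ra-di-ant, 3 syllables) is undercounted because "ia" merges into one vowel group, while "smile" (1 syllable) is overcounted because of the silent "e". If a rule-based counter gets this wrong, a next-token predictor has no built-in reason to get it right.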

A test you can do with a big model and lots of RAM is comprehension: give it a large text and ask the model about specific information spread around the text, e.g. "what follows is a crime novel; make me a precise and accurate timeline of where Waldo has been spotted, in chronological order". This is a task LLMs are good at, and the bigger the model, the more difference you see in the quality of the output.
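That kind of "needle in the haystack" comprehension test is easy to build yourself. A minimal sketch (the filler sentence and the Waldo sightings are made-up placeholders; you would feed `prompt` to whatever local frontend you use):

```python
import random

# Hypothetical facts to bury in the text; the model is then asked to
# recover them in chronological order.
sightings = [
    "Waldo was spotted at the docks on Monday.",
    "Waldo was spotted in the library on Tuesday.",
    "Waldo was spotted at the train station on Wednesday.",
]

# Padding that the model has to read past to find the facts.
filler = ["The rain kept falling on the quiet town."] * 50
rng = random.Random(42)  # fixed seed so the test is reproducible

# Scatter the sightings at random positions inside the filler.
document = filler[:]
for fact in sightings:
    document.insert(rng.randrange(len(document)), fact)

prompt = (
    "What follows is a crime novel. Make me a precise and accurate "
    "timeline of where Waldo has been spotted, in chronological order.\n\n"
    + " ".join(document)
)
```

Grading is then simple: check whether the model's answer lists the sightings in the right order. Growing `filler` stretches the context until smaller models start missing needles.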

Another test is to use a multimodal model like LLaVA, and feed it images and text.

For more practical applications, I run a Llama 3.1 8B model on my Framework, and I use it to help write my D&D campaign and novels. Unfortunately it's still not good enough for code at that size.

Minecraft Java performance is a steaming pile of garbage if you run it locally, even with optimization mods. I'm not surprised that even the 14900KS had stutters. I used to play on a system that was not very far from the $50 laptop, and on servers it's actually VERY playable.

This video just proves my point: the sweet spot for PCs and laptops is in the $500-$600 range.

Your radiant smile is 6 syllables, not 5. So that is not a haiku.

How did he count that as 5?

32 minutes ago, Slipping Jimmy said:

Your radiant smile is 6 syllables, not 5. So that is not a haiku.

How did he count that as 5?

It's a Sokka haiku

1 hour ago, Slipping Jimmy said:

Your radiant smile is 6 syllables, not 5. So that is not a haiku.

How did he count that as 5?

curious where you get 6 from?

The LLM stuff piqued my interest. What resources would y'all recommend to start learning? And does LTT offer any tutorials about them yet?

The LLM part was kind of misleading. Better hardware does not make LLMs "better"; it only makes them generate their outputs faster. The fact that the LLM managed to generate a proper haiku on the $5000 computer was roll-of-the-dice luck (assuming it was the same model they tested the $500 and $50 PCs with).
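The "roll of the dice" is literal: inference samples from a probability distribution, so the same model given the same prompt can produce a different haiku on every run. A minimal sketch of temperature sampling, the scheme most frontends use (toy logits, not any specific model):

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    """Draw a token index via softmax sampling with temperature.

    Temperature > 0 means identical inputs can yield different
    tokens run to run; lower temperature concentrates probability
    on the highest-logit token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy next-token distribution: three candidate tokens with fixed logits.
logits = [2.0, 1.5, 0.5]
draws = [sample(logits, temperature=1.0) for _ in range(20)]
print(draws)  # a mix of 0s, 1s and 2s: same "model", different outputs
```

Faster hardware reruns this loop sooner; it does not change the distribution being sampled, which is why a lucky haiku on one machine says nothing about the hardware.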

33 minutes ago, LTTwatchr said:

The LLM stuff piqued my interest. What resources would y'all recommend to start learning? And does LTT offer any tutorials about them yet?

Assuming you want to run stuff locally, my recommendation is to start out with koboldcpp (https://github.com/LostRuins/koboldcpp). SillyTavern also has some good community docs; here's their page on running a self-hosted model (https://docs.sillytavern.app/usage/local-llm-guide/how-to-use-a-self-hosted-model/). If you just want to get up and running without fiddling, there's backyard.ai (https://backyard.ai/desktop). Most of the interest in running local LLMs is around chatbots. These models are much smaller and are good at conversation, but are not good at "knowing" things (unlike gpt-4o). Larger general-purpose models like gpt-4o require a huge amount of VRAM that isn't really feasible to run locally (we're well over 100 gigabytes of VRAM).
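Once koboldcpp is running, you can talk to it from a script as well as from a chat UI; it serves an OpenAI-compatible HTTP endpoint alongside its own API. A minimal sketch (the port 5001 default and the endpoint path are assumptions from koboldcpp's docs — check your own launch output; the network call is left commented so the snippet runs offline):

```python
import json
import urllib.request

# Assumed koboldcpp default address; verify against your launch output.
URL = "http://localhost:5001/v1/chat/completions"

payload = {
    "model": "local",  # koboldcpp serves whichever model it was started with
    "messages": [
        {"role": "user", "content": "Write a haiku about cheap laptops."}
    ],
    "max_tokens": 64,
    "temperature": 0.7,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with a server running:
# reply = json.load(urllib.request.urlopen(req))
# print(reply["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI API shape, most client libraries and frontends that speak that API can be pointed at it by just changing the base URL.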

9 hours ago, LTTwatchr said:

The LLM stuff piqued my interest. What resources would y'all recommend to start learning? And does LTT offer any tutorials about them yet?

I'm running local LLMs on my 16GB Framework 13 using LM Studio and a Llama 3.1 8B model at 4-bit quantization. I get about 4 tokens per second using the AMD APU acceleration. It works out of the box for me, and it has a simple interface to download models and run them.

LLMs are a powerful tool for a narrow range of tasks. They are no use to you unless you apply them to something you need AND that LLMs are good at; you are going to have to find out if you can leverage them in a useful way. I use them for novel writing, help making the plot and scenarios in my D&D campaign, researching libraries and sample code for new things, and generating documentation for my code or explaining and refactoring code I found online.

E.g. Bing Chat is patient, immediate, and has a superhuman ability to understand compiler errors. I no longer go to Stack Overflow; I just paste the sample code and the error, and 90% of the time Bing Chat will either fix the code or get me 90% of the way to the solution. For online models, make sure you aren't submitting your company's code unless the higher-ups allow it or have a corporate account for you to use.

17 hours ago, MrZoraman said:

The LLM part was kind of misleading. Better hardware does not make LLMs "better". It only makes them generate their outputs faster. The fact that the LLM managed to generate a proper haiku on the $5000 computer was roll of the dice luck (assuming that was the same model they tested the $500 and $50 PCs with).

They were using different models for each config. And the cheaper desktop with an AMD GPU wasn't even using the GPU (which is kinda annoying to achieve on AMD, tbf).

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

21 hours ago, nether1234 said:

curious where you get 6 from?

your 1 syllable

ra di ant 3 syllables

sm ile 2 syllables

1+3+2=6

2 minutes ago, Slipping Jimmy said:

your 1 syllable

ra di ant 3 syllables

sm ile 2 syllables

1+3+2=6

Smile is 1, according to the dictionary.

Pretty sure the stock F3 menu in Minecraft drops the frames pretty hard sometimes; next time you get a chance, you should try BetterF3.

6 hours ago, Slipping Jimmy said:

your 1 syllable

ra di ant 3 syllables

sm ile 2 syllables

1+3+2=6

how do you say smile to make your chin drop twice?

What Nvidia card would be a better price-to-performance substitute for the ASRock Challenger D Radeon RX 6600 8GB listed in the $500 PC parts list? A 1080?

1 hour ago, l3x1 said:

What Nvidia card would be a better price-to-performance substitute for the ASRock Challenger D Radeon RX 6600 8GB listed in the $500 PC parts list? A 1080?

There isn't one.

13 hours ago, l3x1 said:

What Nvidia card would be a better price-to-performance substitute for the ASRock Challenger D Radeon RX 6600 8GB listed in the $500 PC parts list? A 1080?

There absolutely isn't one.
Nvidia is garbage value at the low price point.

16 hours ago, l3x1 said:

What Nvidia card would be a better price-to-performance substitute for the ASRock Challenger D Radeon RX 6600 8GB listed in the $500 PC parts list? A 1080?

3060 8 GB... assuming you can even find one for less than $200.

Not even done with the video, but it could have been so much more informative if they had shown at least two PCs in between $500 and $5000. Now it's just: look, we have a $50,000 PC.

On 9/10/2024 at 8:06 PM, starsmine said:

how do you say smile to make your chin drop twice?

WHAT?

On 9/10/2024 at 1:52 PM, nether1234 said:

Smile is 1, according to the dictionary.

Pronounce it. It is 2.
