
captain_to_fire

Member
  • Posts: 5,289
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    captain_to_fire got a reaction from soldier_ph in one of the threads of all time   
    so long and good night
     

  2. Funny
    captain_to_fire got a reaction from mecarry30 in Apple announces support for RCS in 2024   
    Looks like communist EU has succeeded
  3. Funny
    captain_to_fire got a reaction from da na in The Windows 10 and 11 free trials (I mean upgrades) are no more   
    Oh the memories 🏴‍☠️
     

     
     
  4. Funny
    captain_to_fire got a reaction from Dracarris in Nothing announces iMessage for Android ... somehow   
    I mean, iMessage partially works in Windows 11's Phone Link app and that's greenlit by Apple, though I doubt they'll keep it that way much longer. At work most of us use iPhones but we use Teams and Signal; at home we mostly use WhatsApp, and both my parents use Samsung phones, so green bubbles being treated as second-class citizens is a pathetically US-only issue.
  5. Agree
    captain_to_fire reacted to suicidalfranco in Nothing announces iMessage for Android ... somehow   
    Because outside of the US, SMS texting is a dead technology that has been massively replaced by WhatsApp in some regions and WeChat in others. There's no reason to use SMS when it's less convenient, more expensive, less feature-rich, limited in the characters and files you can send, etc.
  6. Agree
    captain_to_fire reacted to hishnash in Nothing announces iMessage for Android ... somehow   
    Given that they are asking for users' Apple ID passwords, and even asking them to grant 2FA approval to Macs that the users do not control, I think the security concerns are legit. Also, this sort of breaks the entire end-to-end encrypted nature of things.
  7. Agree
    captain_to_fire reacted to williamcll in Apple Set to Challenge Latest EU Crackdown on Big Tech Dominance   
    Apple should stop selling in EU and we'll see who has the last laugh
     
  8. Agree
    captain_to_fire reacted to Levent in Malware (Didn't find any specific forum so I wrote it here)   
    Every Windows install has Windows Defender, which is enabled by default. If that doesn't work for you, you can try Malwarebytes.
  9. Like
    captain_to_fire reacted to Agall in AMD Ryzen 8000G to feature Zen 4(c) and RDNA3 architectures with the 8700G/8600G/8500G and 8300G SKUs   
    I've seen a few reports on this, I'll be glad to see it.
     
    I will be buying the highest model day 1 and swapping the R5 7600 in my living room PC for it. APUs are just a lot of fun.
  10. Agree
    captain_to_fire got a reaction from thechinchinsong in M3 Macbook Pro Reviews. 8GB of RAM on a $1600 laptop is criticised heavily   
    The only reason I can think of as to why the base 14" Pro has a paltry 8GB of memory is to push people to buy the 18GB version. But I disagree with TechCrunch's review. The 14" Pro with M3 is a better buy compared to a specced-out 15" M2 Air:
    • More I/O (HDMI and SD card slot)
    • Better sustained performance thanks to the active cooling of the 14" Pro
    • Has a ProMotion display
    • Better speakers
    But I agree with the RAM criticism. Since Apple made up a weird configuration anyway, they might as well have shipped the base models with 12GB of RAM. Not sure if there are companies that make 6GB LPDDR5 modules.
  11. Informative
    captain_to_fire reacted to LAwLz in OpenAI's First DevDay   
    Summary
    OpenAI, the small company that less than a year ago revolutionized the entire field of computing by releasing things like ChatGPT and Dall-E, is holding their first of presumably many conferences today. 
     
    What will be announced remains to be seen.
    Current rumors and speculations include:
    • New developer tools.
    • New models.
    • A new "team" payment plan aimed at companies.
    • A framework for building custom chatbots.
    Quotes
    -Probably someone at OpenAI
     
     
     
    The announcements:
     
    Starting with some numbers.
    • 2 million developers are developing using OpenAI APIs.
    • 92% of Fortune 500 companies use OpenAI tools.
    • 100 million weekly active users.
     
    GPT-4 Turbo - A new GPT model
    GPT-4 Turbo is a new model that improves on GPT-4 in some major ways.
    It will be smarter than GPT-4, but it will also be cheaper. A LOT cheaper. In terms of API pricing, GPT-4 Turbo is 3x less expensive per input token and 2x less expensive per output token.
    In other words, using GPT-4 Turbo will most likely cost less than half (OpenAI's own estimate is about 2.75x cheaper) of what it would cost to implement GPT-4 into a product. And that's with better performance than GPT-4.
     
    It will cost 1 cent for 1,000 prompt tokens, and 3 cents for 1,000 output tokens.
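    As a quick sanity check of those numbers, here's a back-of-envelope sketch in Python. The GPT-4 per-token prices are implied from the "3x cheaper input, 2x cheaper output" figures above, and the 10K-in/1K-out request mix is purely an illustrative assumption:

    # Hypothetical cost comparison using the per-1K-token prices quoted above.
    def cost(prompt_tokens, output_tokens, in_per_1k, out_per_1k):
        return prompt_tokens / 1000 * in_per_1k + output_tokens / 1000 * out_per_1k

    # Illustrative request: 10,000 prompt tokens in, 1,000 tokens out.
    gpt4       = cost(10_000, 1_000, 0.03, 0.06)   # implied GPT-4 prices -> $0.36
    gpt4_turbo = cost(10_000, 1_000, 0.01, 0.03)   # quoted GPT-4 Turbo prices -> $0.13
    print(f"GPT-4: ${gpt4:.2f}, GPT-4 Turbo: ${gpt4_turbo:.2f}, "
          f"{gpt4 / gpt4_turbo:.2f}x cheaper")     # ~2.77x for this input/output mix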
     
    Right now, they haven't prioritized speed for GPT-4 Turbo because they wanted to bring the price down first. But speed is the next thing they will work on, and it should improve "soon".
     
     
     
     
     
     
     
    The six major improvements for developers
     
    1 - Context length.
    GPT-4 supports a "context length" (number of tokens) of up to 8K, and in some cases up to 32K. The new model, GPT-4 Turbo, supports a context length of up to 128K tokens. For comparison, that means it could keep a 300-page book in memory as context. It also means it will be less inclined to lose accuracy as your conversations grow very long.
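    For a rough back-of-envelope on that "300-page book" figure (the ~0.75 words-per-token ratio and ~325 words per page are common rule-of-thumb assumptions, not numbers from the announcement):

    context_tokens = 128_000
    words = context_tokens * 0.75   # ~96,000 words at roughly 0.75 words per token
    pages = words / 325             # assuming ~325 words per printed page
    print(f"~{words:,.0f} words, ~{pages:.0f} pages")   # ~96,000 words, ~295 pages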
     
     
    2 - More control.
    They have implemented more controls over what their models output. For example, you can now set an option that makes sure the model outputs valid JSON.
    The model can now also make several function calls inside a single message. Before, if you asked it to "raise my windows and turn the radio on", it might only have raised the windows. Now it will properly do both things at once.
    "Reproducible outputs" is another new feature, launching today. It lets you pass in a "seed" parameter, and the model will then try to return the same output for the same request and seed.
    They are also adding logprobs support.
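    For illustration, here's roughly what those options look like with the OpenAI Python SDK (v1.x). Treat it as a sketch: the model id, messages, and seed value are placeholder assumptions, not taken from the announcement.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",               # assumed GPT-4 Turbo preview id
        response_format={"type": "json_object"},  # JSON mode: output must be valid JSON
        seed=1234,                                # ask for (best-effort) reproducible outputs
        messages=[
            {"role": "system",
             "content": "Reply as JSON with the keys 'actions' and 'status'."},
            {"role": "user",
             "content": "Raise my windows and turn the radio on."},
        ],
    )
    print(resp.choices[0].message.content)   # a JSON string
    print(resp.system_fingerprint)           # changes when the backend configuration changes

    Note that JSON mode expects the prompt itself to mention JSON, which is why the system message spells out the expected keys.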
     
     
    3 - Better knowledge
    The ChatGPT platform will be able to retrieve new information. You will be able to feed the model your own information, such as a database, and add it to the program you're building using their APIs.
    They have also updated the models' knowledge. It used to only contain information from before September 2021, but it is now updated with knowledge up until April 2023. They have also said that they will try to never let ChatGPT get that outdated again; they want it to have as much new information as possible, so they will keep feeding it new information.
     
     
    4 - New modalities
    They are adding API support for Dall-E 3, GPT-4 Turbo with Vision, and their text-to-speech model starting today.
     
    The new text-to-speech model seems really powerful. You could probably still pick up on some stuff and figure out that it's an AI model generating the voice, but it seems to be good enough to not matter for legitimate use like language learning.
    They are also releasing an update to their open-source speech recognition model "Whisper". The new model is called Whisper V3 and features updated support for several languages. Which ones remain to be seen.
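    As a hedged sketch of what the new audio endpoints look like in the OpenAI Python SDK (the "tts-1" and "whisper-1" model ids and the "alloy" voice are commonly documented defaults, assumed here rather than taken from the keynote):

    from openai import OpenAI

    client = OpenAI()

    # Text-to-speech: generate an MP3 from a string.
    speech = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input="Hello from the new text-to-speech model.",
    )
    speech.stream_to_file("hello.mp3")  # helper on the binary response (SDK-version dependent)

    # Speech-to-text via the hosted Whisper endpoint; whether this is already
    # Whisper V3 under the hood isn't stated in the announcement.
    with open("hello.mp3", "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
    print(transcript.text)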
     
     
    5 - Customization 
    Companies will now be able to work together with OpenAI to create a custom model. OpenAI will help companies with the development and training of the custom model.
    OpenAI pointed out that they won't be able to do this with many companies, and it will be expensive, but they are inviting companies who really want to make a big push with AI to reach out to them.
    GPT-3.5 Turbo now also supports fine-tuning on the 16K model.
    GPT-4 now also supports fine-tuning (in an invite-only experimental program). Not sure if GPT-4 Turbo will support fine-tuning, but it doesn't seem like it.
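    For reference, a minimal sketch of what fine-tuning GPT-3.5 looks like through the API. The "gpt-3.5-turbo-1106" id is an assumption for the 16K-context model mentioned above, and "train.jsonl" is a placeholder for a file of chat-formatted training examples; GPT-4 fine-tuning is invite-only, so no model id for it is shown.

    from openai import OpenAI

    client = OpenAI()

    # Upload the training data, then start a fine-tuning job against it.
    training = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=training.id,
        model="gpt-3.5-turbo-1106",   # assumed id for the fine-tunable 16K model
    )
    print(job.id, job.status)         # poll the job until it reports "succeeded"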
     
    6 - Higher rate limits
    All GPT-4 customers will get a 2x increase in their API tokens-per-minute rate limit.
    Customers will also be able to request further rate-limit increases, with a direct quote for how much it will cost, from their API account settings page.
     
     
     
     
    Copyright Shield
    OpenAI will now step in and defend their customers (ChatGPT Enterprise and API customers) if they face legal battles over copyright claims, and will pay the legal fees for you.
    They also took the opportunity to remind everyone that they do not train their models on data submitted by Enterprise customers or API customers.
     
     
     
    Cheaper GPT-3.5 Turbo 16K
    OpenAI is lowering its pricing of GPT-3.5 Turbo 16K. Inputs are 3x cheaper and outputs are 2x cheaper than before.
    From $0.003 per 1000 input tokens to $0.001 per 1000 input tokens.
    From $0.004 per 1000 output tokens to $0.002 per 1000 output tokens.
     
    This means that the 16K model of GPT-3.5 Turbo now costs less than the 4K model. This also applies to the fine-tuned models.
     
     
    ChatGPT improvements
    Despite this being a developer conference, OpenAI said that they will be making some improvements to ChatGPT as well.
     
    First of all, they will update the GPT-4 model to GPT-4 Turbo for ChatGPT Plus subscribers.
     
    Secondly, you will no longer need to select "Browse with Bing", "Dall-E 3" or whatnot from the dropdown menu. It "will just work" automatically from now on. If you ask it to draw an image, it will know you want to use Dall-E 3 without you having to select it.
     
     
    GPTs - Customized versions of ChatGPT
    GPTs are what OpenAI calls their new customized versions of ChatGPT. The idea is that someone (seemingly with very little technical knowledge) will be able to make a custom ChatGPT version that behaves in a specific way. You will be able to build your own GPT using ChatGPT and natural language.
     
    One example they showed was Code.org, which has designed a "lesson planner GPT".
    It's a customized version of ChatGPT aimed at teachers trying to teach middle-school children coding. From what I understand, this ChatGPT version knows about Code.org's curriculum and intended target audience and will tailor its outputs based on that. So if you ask it what a for loop is, it will know you're talking about programming and give examples that will appeal to middle schoolers, such as explaining things using video game characters.
    These GPTs can be integrated (and are in essence extensions) with their plugins. You can tell a GPT to pull information from documents you feed it. In the demo OpenAI uploaded a video lecture to a GPT and told it to pull information from the video, which it did.
     
     
     
    These GPTs can be kept private, you can create shareable links to your GPTs, or, if you have an enterprise account, you can limit a GPT to only being accessible from within your own company.
    There will also be a marketplace for GPTs, and the most popular and powerful GPTs will get a portion of OpenAI's revenue. I wasn't able to decipher whether this is similar to an app store, where users pay, say, 10 dollars for an app and some portion goes to the store owner, or whether this will just be OpenAI giving away their own money to developers who publish things for free, simply to incentivize people to create GPTs and build an ecosystem around them.
     
     
     
     
     
    My thoughts
    I will add things as they get announced. It will be interesting to see what happens though. 
     
    Sources
    Watch the opening keynotes:
     
     
     
     
     
     
     
  12. Informative
    captain_to_fire reacted to Zando_ in M3 Macbook Pro Reviews. 8GB of RAM on a $1600 laptop is criticised heavily   
    Something else is running then, or websites are being very strange in Safari. I regularly have that many tabs open, in multiple Safari windows, usually running stuff that's pretty heavy (the Unifi admin portal is sucking 1.26GB of RAM all by itself right now), in addition to Microsoft Excel, Firefox with 10-12 tabs, my email application, Slack, Dropbox, Monday, often TeamViewer and a few other odds and ends. Base model M1 Air and I'm sitting at 7.15GB/8GB memory usage (according to activity monitor), so a liiitle bit of RAM left. Here's what iStatMenus reads for memory pressure/usage: 

     
    That said though, I do agree that 8GB on a $1600 machine is very silly. It's fine on a $999 Air that's meant for office work/web browsing, Pros are usually meant for beefier work so while it's technically fine, it certainly feels rather shit as a customer. 
  13. Funny
    captain_to_fire reacted to leadeater in Launch event for Core Ultra and 5th Gen Xeon CPUs. 'AI is everywhere' says Intel   
    Could websites please stop asking which images have palm trees then please lol
  14. Like
    captain_to_fire reacted to leadeater in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    It's a lot harder to match the power efficiency of EPYC at scaled-out core counts; those products are power-efficiency focused, unlike the desktop CPUs, and don't even allow boosting as high as many of the laptop parts either. AMD has 64-core options at 155W (Zen 4) and 225W (Zen 3), but these offer no GPU at all, where Apple's would. Personally I think Apple took a step back and was evaluating whether scaling out past 2 even made sense at all for the product; I don't think they were ever actually looking at doing 4. Maybe as some side thinking, but I can't imagine it was considered that much.
     
    Four chips would make an exceedingly large package even without the memory, and from what I understand something like that simply isn't viable just due to the physical size.
  15. Like
    captain_to_fire got a reaction from leadeater in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    The Ultra line remains two Max chips linked together with a 2.5TB/s low-latency interposer (or, as Intel calls its version, an embedded multi-die interconnect bridge/EMIB). The rumors of a quad chip go back to the M1 Ultra. But some leakers say the reason Apple didn't do a quad chip for the Mac Pro is apparently that simply adding more chips didn't scale performance and power consumption to what Apple wants; maybe they were trying to achieve similar performance to a 4th-gen EPYC at half the power, but they couldn't deliver, so they probably said "fuck it" and put an M2 Ultra in the Mac Pro.
     
    So yeah, the Mac Studio basically killed the Mac Pro. In essence, the Mac Studio is what the 2013 trash can should've been.
  16. Agree
    captain_to_fire reacted to hishnash in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    And that 45W is full SoC power draw... an i9-13900KS will also have chipset power draw, memory, etc.
  17. Like
    captain_to_fire reacted to missell in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    Oh wow, I completely overlooked this. I've been planning on getting a 16GB/512GB 15" Air toward the end of the year, and the difference from a same-specced 14" Pro is roughly $200 AUD. Absolutely worth it for the additions and the newer platform IMO.
  18. Agree
    captain_to_fire got a reaction from thechinchinsong in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    Even if Qualcomm manages to make a chip on par with Apple's, the major obstacles would be Windows itself and developers. Windows is bloated because it cannot let go of the legacy stuff that enterprises use, unless Microsoft wants a major outcry from everyone from PCMR cavemen to businesses. 32-bit applications, for instance, are still supported in Windows 11. Also, will devs both big and small be willing to do the work of porting their programs to ARM64?
     
    Apple, on the other hand, can easily change their OS without major outcry. Did anyone complain when Apple axed 32-bit on Macs? I bet Apple could make the upcoming 2024 macOS release an ARM exclusive and not a lot of people would raise a stink about it. I think the first-gen PCs with the Snapdragon X Elite will be a hard sell because they'll probably be priced higher than their Intel/AMD counterparts.
  19. Funny
    captain_to_fire got a reaction from Needfuldoer in The Windows 10 and 11 free trials (I mean upgrades) are no more   
    Oh the memories 🏴‍☠️
     

     
     
  20. Informative
    captain_to_fire got a reaction from thechinchinsong in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    Bummer that the MacBook Air didn't get the M3 update. So I'm guessing they're going to kill the 13" MacBook Pro this time, since the 14" will now start with the vanilla M3.
     
    Edit: Yep the 13" Pro is gone
     

  21. Informative
    captain_to_fire got a reaction from thechinchinsong in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    14" MacBook Pro now starts with the M3, not M3 Pro, all chips of the M3 fam supports ray tracing
  22. Like
    captain_to_fire got a reaction from thechinchinsong in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    M3 fam now supports AV1 too
  23. Like
    captain_to_fire got a reaction from hishnash in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    Bummer that the MacBook Air didn't get the M3 update. So I'm guessing they're going to kill the 13" MacBook Pro this time, since the 14" will now start with the vanilla M3.
     
    Edit: Yep the 13" Pro is gone
     

  24. Agree
  25. Like
    captain_to_fire got a reaction from hishnash in Trick or M3-treat? - Apple’s pre-Halloween “Scary Fast” virtual event   
    M3 fam now supports AV1 too