Everything posted by Thaldor

  1. I don't know what is included in the media bargaining code, but depending on what it is, these are my opinions: 1) The scrutinized part is including the news site's own summary of the article with the link. In that case I am fully fine with the social media platform having to buy a license for that content, because that is someone else's content they are using. 2) Including a platform-written summary of the article with the link. This is kind of an asshole move but a grey area, especially when the news media has become so damn awful with their clickbait that YouTubers seem less misleading with their thumbnails and video titles. I would probably move the linking fees directly to the users and make the users pay for it all, because seriously, I would pay for a service that lets me go fast through world events without the clickbait garbage that modern news media likes to sow. 3) It's not the summary, just that the social media platform links to the article in any form (be it the page title, the article title or just the plain URL). The news media got completely greedy and they should just crawl under a rock and die. The social media platform should just outright delete every post linking to those sites and make getting to those news sites as hard as possible (no hyperlinks in any form; if a user wants to visit, it's typing the URL, copy-paste or an external search engine) and, as a cherry on top, ban the news media from the platform (they didn't want their content to be linked on the platform, so why would they want to be on the platform in the first place?).
  2. If you are that much against the power consumption, realistically you can't do much else than have it collecting dust or take the $500 and sell it. Running it as a Steam Link to stream games to mobile devices is pretty stupid if you have a better PC; most likely if you're streaming a game to a mobile device you won't be using your PC (unless rendering or compiling or something), so you could just as well keep it running and stream from it. Using it as a render farm or similar is also kind of stupid if you don't really need a two-PC render farm. Using it as a NAS/media PC is a bit stupid as it's way overkill (and the power consumption). You probably don't have a 3D printer, so a dedicated PC for that is out of the picture. An HTPC is quite niche these days with Steam Link and Android TVs everywhere. You could use it to host dedicated servers for games, but that would require that you play games that use player-hosted servers. You could always nail the parts to a wall and make an art piece out of them if selling them or just holding on to them seems a waste. (Just saying)
  3. Nintendo doing Nintendo stuff; you reap what you sow (and that goes both ways: going against Nintendo, and people supporting Nintendo by buying the overpriced and kneecapped Nvidia Shield (originally K1)). Ars Technica breaks the thing down a bit more for those interested. Basically Nintendo is doing what Nintendo does: tilting at windmills with Mario money, going after decades of rulings favoring "bring your own mallet" emulation, testing the DMCA's ban on breaking "developed" copy protections, arguing that even PHYSICAL games are only licenses to use that one single copy, downplaying homebrew (which I kind of understand, because how many actual homebrew games have been released? There's mostly one of each necessity program, like a media player and a browser, and if those have huge flaws then a second one; leaving emulators out of this for "you know" reasons), dismissing accessibility ("Yuzu would need to point to some ACTUAL example of accessibility"), and mostly Nintendo seeking to frighten people off from emulating the overpriced and kneecapped Nvidia Shield (originally K1).
  4. The thing is in the word "reference". The point isn't to have a median value, the point is to have an exact, set value that will be globally the same on every other device required to show that same value. Instead of calculating a median flight path from the flights, it's an idealized flight path to which every flight is compared. Except in the case of image and sound, the ideal happens to be pretty boring and bland, because humans are odd and like a bit more saturation and contrast to make the image more captivating, and a bit more bass and treble to make the music sound more alive and bigger; and so if the image/music is good on the reference gear, it will be great on the consumer gear.
  5. I have to say, now that I have taken a free deep dive into premium-quality Tidal, that there is a difference. They have some very bad mixes. That can also be my taste in music, but seriously, holy fucking shit, where the fuck have they found mixes so flat that I have heard more dynamic planks in hardware stores? I had to go through everything, and I even disabled all post-software EQs, to find out whether it was just something in my signal chain that makes Tool sound like a wooden hammer with as much balls as a castrated cat, while Deep Purple sounded like someone recorded them off cassette tapes. Even my self-ripped LP rips from my father's collection, done somewhere around the 90's to early 00's with garbage-level mp3, had more dynamic range than whatever mix Tidal found at Goodwill. And I don't even want to talk more about Rainbow... Stargazer's intro snare is dead on the Tidal mix... I have some really high-quality rips and some absolute garbage rips of the Rising album, but none of them managed to turn "tshhhh" into "tsk" (even the Deluxe Edition mixes both lack the lower spectrum compared to the SHM-CD which the "New York Mix" should come from; those are way better mixes than the non-Deluxe ones on Tidal, but I have no idea where they got those bad mixes). And I am not talking about something that would come from file formats or bitrates (while those can have an effect, not this much). The low-level rumble is just gone, especially from Tool; in Jambi it's like the Tidal mix was afraid to break something by really driving that bass. And before someone says my rips are from CDs and CDs have higher-quality audio: no, Tidal should have "superior dynamics" because of 24-bit 96kHz compared to my 16-bit 44.1kHz FLACs, but no, stuff like Schism's 1:20 bridge is just not there, and the 2:00 mark sounds like someone booked Tool to play in a fucking school disco, the soundstage sounds that small.
I wasn't sure whether I had fucked something up or added some EQ to my rips, so I started to strictly compare Tidal to Spotify (without any EQ), and there it was again: the Tidal mixes were flat. The Spotify mixes were, as far as I can tell, identical to my rips. Also, how fucking stupid must someone be to fall for the "Max/MQA" garbage? They don't even fucking hide that it's just mumbo jumbo; they literally say, directly, "Max/MQA up to 24-bit 192kHz". Just to make it even clearer: "UP TO". No, they aren't saying the "Max/MQA" quality is somehow superior to "Max/FLAC" or whatever High or anything. It's just probably somewhere up to there, or not even close, but fuck, they told you. It's the same scamming as if I sold my car with "You know a Toyota Yaris can go over 300km/h on dirt roads?" And in reality, yes, a Toyota GR Yaris (Rally2) tuned and made for WRC can fucking rip up a dirt road, but my Toyota Yaris 1.0L can hardly get to 150km/h downhill on a highway. But hey, "a Toyota can go over 300km/h"; I am not wrong, but I am misleading you like a pig to the slaughterhouse. They're not afraid to tell the bit depths and sample rates for FLAC recordings, but with MQA you get basically no information unless you rip the songs and look at them yourself, at which point they would probably pull the normal audiophile scamming argument, "it's not our system, it's yours", even if you had airtight proof that it's a non-altered file from their servers. And when it comes to audiophile stuff: always remember that most of the Beatles albums were rough-mixed with the Beatles in a studio that only operated in mono, and every time you hear a stereo mix of the Beatles, it was mixed entirely by some audio technician as an "extra mile" without the Beatles giving two fucks about it.
Back then and earlier, stereo systems were so rare that pretty much no one had them, and the few stereo mixes that were done were mostly there to market that the studio could do it; not that the music sounded any better, just that the studio had the capability. That's an example of what goes on at the other end: unless you have insider knowledge, you will have pretty much zero idea how the thing was supposed to sound, and the best you can do is listen to it however you think it sounds best. So go crazy and experiment, add bass, twist the V, play with the reverb, and remember that there's a reason why "studio monitors" and "high-end HiFi" are separate things.
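(Side note for anyone who wants to sanity-check "flat mix" claims like the above with numbers instead of ears: a quick proxy is the crest factor, peak level versus RMS level. The snippet below is a minimal sketch with synthetic signals, not a real loudness standard like EBU R128; the function name and the clipping demo are my own illustration.)

```python
import math

def crest_factor_db(samples):
    """Crest factor (peak level vs RMS level) in dB. A rough proxy for
    how 'squashed' a mix is: heavily limited/flattened masters score lower."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

n = 44100  # one second at CD sample rate
# A pure sine has a crest factor of ~3.01 dB...
sine = [math.sin(2 * math.pi * 440 * i / n) for i in range(n)]
# ...while hard-clipping it toward a square wave pushes it toward 0 dB,
# the same direction heavy limiting pushes a music mix.
clipped = [max(-1.0, min(1.0, s * 10)) for s in sine]

print(round(crest_factor_db(sine), 2))   # ~3.01
print(crest_factor_db(clipped) < 1.0)    # True
```

Running the same measurement on two rips of the same track (decoded to raw samples) gives a like-for-like comparison that no amount of "Max/MQA" labeling can argue with.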
  6. One magic word: steaming. Carrots, green beans, broccoli, asparagus, sprouts, potatoes, pretty much whatever you fancy, chopped to a uniform size (nice big pieces; baby carrots are pretty much the minimum size you want, because you want that steam to get everywhere and move freely), steamed for 5-15 minutes, then add some olive oil, a pinch of salt, freshly ground black pepper and, if you want, some herbs and lemon juice (butter and honey are also excellent). I would recommend investing in a layered steamer pot so you can have a good amount of water without needing to care about adding more, and a couple of layers of steaming baskets so you can add hard stuff like carrots, potatoes and cauliflower early and just add the next layer with green beans, broccoli and other soft stuff in the middle (you don't want them too soggy). But you can just use something like a metal strainer in a deep pot to keep the veggies off the water (you don't want to boil them, so not in the water, but on top of the water in the hot steam). You can also just use a microwave-safe bowl, put the veggies and half a deciliter of water in it, close it tightly with a lid or plastic wrap and throw it in the microwave on high for like 5 minutes; check if it's done, and if not, add more water and repeat. The best part is, you cannot fail. Try them with a fork, and if they're too hard, let them sit a couple of minutes more. If they get too soggy, that's the worst, not nice to eat, but don't worry, you cannot burn the veggies, so just throw them into a blender or mash them into a purée and they are saved.
  7. LTT isn't tech support, especially for people with actual needs and problems who don't want everything to be janky AF and to fiddle around with it all the time. I am not saying LMG doesn't have people with expertise and wisdom in what they do. It's more that LTT viewers probably don't want to watch a 15-minute recap of setting up good old solid storage that doesn't have any glitter, ADHD, or brand spanking new features and doesn't boast the newest and most powerful hardware possible. Building a rock-solid editing rig that isn't boasting halo products and doesn't have some "wow" thing in it, or installing a normal NAS, just isn't an LTT video, and I presume the staff knows it and wouldn't make that kind of video; they would stuff it like a Thanksgiving turkey with the kind of stuff that leads to janky-as-fuck shit that "just doesn't work" and requires more support and care afterwards than people would, or even could, give. Like imagine some (imaginary) Top Fuel dragster channel making a video about fixing someone's work truck. Could they do it? Totally, but the guy probably won't be very happy with his truck now running only on nitromethane, being incapable of driving longer than a minute and requiring an engine rebuild after every couple of drives; and the viewers wouldn't want to watch them rebuild an everyday diesel engine without it at least being rigged with enough horsepower to do drag racing.
  8. There's a ton of Creative marketing speak in there. The SBX (Pro Studio) Surround is just an effect, like the Crystalizer, Bass and all the rest under there in the software; the slider controls how strong the effect is, which would be kind of stupid if you had an actual surround setup (like a slider to turn your 5/7.1 system into a 2.1 system). The Surround Virtualization works for headphones and still does the same as its earlier iterations have done. If you activate it and check the Windows audio settings, you will see it has changed your settings to 7.1 speakers (if you didn't already have that); it basically takes that 7.1 sound and does the same as DTS/Dolby Atmos does, mixing it down to 2-channel audio. It's basically the same but further down the signal path, so it doesn't receive the spatial audio data (the sound source position relative to the listener), just the speaker output, and downmixes that for headphones. I say "basically" because it's a bit more complicated, on the level of simulating a human head in a room with the audio source and calculating the differences in volume and timing between the ears hearing the sound and all that. The big difference here is that spatial systems calculate what the listener HEARS while an old surround system plays the sound from the SOURCE (more or less: a surround system tries to play the gunshot where the gun is and your hearing can then deal with the rest, while a spatial system calculates how the character you play would hear the gunshot). When it comes to games and software and those Windows spatial sound options: if games support actual spatial audio, they send DTS/Atmos/whatever the object positional data (where the sound source is) and the software does the processing, while games that don't support them fall in line with that Creative Surround Virtualization and just take the surround sound the game gives out.
TL;DR: The Surround Virtualization is a working system for headphones, but just know that if you turn DTS/Dolby Atmos/whatever spatial audio on as well, you may have two similar systems on top of each other, both working a bit differently, which may cause problems.
  9. Just to clear one thing up, the SBX Surround is just a spreader, not virtual surround. As in, SBX Surround is an effect that spreads whatever sound to make it sound like surround; it has zero idea about spatial audio or anything relating to the position of the sound. It's the same as some old stereo spreaders that took one audio channel in, used some high- and low-pass filters to create two slightly different audio streams, and gave two channels out: it made a mono mix sound a bit more like a stereo mix, but in no way turned a mono mix into a stereo mix. DTS and Atmos, at least when it comes to their Windows encoders, are more like downmixers: they take in the actual surround mix of the audio and downmix it to stereo while retaining the positional "information" (as in adding delays and such to trick your brain into thinking the sound comes from a certain direction; a sound coming from the back right channel gets slightly lower volume and is played some fraction of a millisecond earlier on the right channel than on the left channel, where it's at an even lower volume, and yes, our hearing really is that easy to fool). They're basically turning surround sound into the ages-old virtual barbershop demo.
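(The delay-and-level trick described above can be sketched in a few lines. This is a toy illustration of the two cues, interaural time difference and interaural level difference, not real HRTF processing; the function name and the 0.5ms/3dB values are my own illustrative picks, real encoders use measured, frequency-dependent data.)

```python
import math

SAMPLE_RATE = 48000

def pan_right(mono, itd_ms=0.5, ild_db=3.0):
    """Crude binaural-style pan: the right ear gets the sound earlier
    and louder, the left ear later and quieter. ITD (time) and ILD
    (level) are the two cues the brain uses to place a source."""
    delay = int(SAMPLE_RATE * itd_ms / 1000)   # left ear hears it later...
    gain = 10 ** (-ild_db / 20)                # ...and quieter
    right = list(mono) + [0.0] * delay         # pad so both channels match length
    left = [0.0] * delay + [s * gain for s in mono]
    return left, right

click = [1.0] + [0.0] * 99                     # a single impulse "gunshot"
left, right = pan_right(click)
print(left.index(max(left)) - right.index(max(right)))  # 24 (left lags by 24 samples)
```

Play those two channels on headphones and the click lands to your right, which is exactly the barbershop-demo effect the post mentions, just stripped to its bones.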
  10. Both big ones are high-power connectors (3-pin 1-phase 200-240V 16/20A AC, 5-pin 3-phase 300-480V 16/20A AC), while XLR is usually an audio connector and DMX512 (or XLR5) is often used in stage lighting; the only thing similar between them is the number of pins. You can guess what would happen to the vocalist if, instead of +48V DC phantom power, the mic received 240V AC; that's pretty much the same as your phone getting +48V phantom power instead of some leakage (milli/microvolts) through the combo jack. My best guesses are that it's either some Cannon connector made into a grounding bar, or used as a system to not lose your important poking thing, which someone might even mistake for a key. Also, to get your daily dose of cursed tech, just head to r/spicypillows; not that the tech is cursed, but there are some really cursed pictures right there.
  11. Favorite? But for real products: the 3.5mm TRS to dual-XLR adapters. They're fine if you really know what you're doing, but once you connect your phone to a +48V mic port as an AUX input, oh boy, are you going to have a "fun" time.
  12. The ShortCircuit video is out, and it perfectly demonstrates what I am saying about it being something less. The hardware is fantastic, minus the weight, the plastic front, which is more of a marketing problem (sorry, "laminated glass"), and the complete failure to learn from others when it comes to making comfortable headbands (that "ugly" and "hair-messing" band coming directly up from the headset is a pretty big thing until we get headsets that weigh as much as eyeglasses). The software is where Apple drops the ball almost completely. What the fuck is that Mac pairing? A joke from the early 2000's? Just to mess with Apple fans: I am writing this with my main rig, but I am lying in my bed on a completely different floor of my house, watching anime on a second screen while some random social stuff is on the third screen. It's a bit of a pain to use only two fingers, but hey, that is a lot more than what you can do with the 7-times-more-expensive headset. And I am not even leveraging 3rd-party software like Virtual Desktop yet, which would allow me to do this over the internet from a Bogo Bogo island while sipping a Piña Colada from a coconut. It may not look as good, but this is all software, which used to be Apple's core expertise, the very reason why Apple has mopped the floor with their "new" products. At least the foveated rendering is applied everywhere and not just in Unity "apps", and the hand culling isn't as bad as in Marques's videos, but it's still pretty far from what you can do with Leap (and taping that thing onto other headsets is so common that the Index even has a perfect slot for it; it's just $130 or something around there, and I would think that with a bit of deving and correct angling you could do 10-finger typing with it). At least Apple is using IR lights to illuminate the surroundings so you can use the hand tracking in the dark.
Also kind of a bummer that the Vision Pro pass-through delay is just 12ms, which is exactly the same as with the Snapdragon XR2 Gen 2, depending on resolution and stacking. Not that big a deal, great even, but nothing too special. There's a ton of work for app developers to flesh out the Vision Pro and new waters to chart. Hopefully Apple's developers get the potential of the hardware and Apple really isn't just throwing hype features at the wall and passively watching from the side which ones devs manage to make stick. Stuff like making the Mac pairing better (or at least up to the level of the competition); the hand gestures also look very primitive; and getting some bigger players interested in the platform is something Apple must actively do, especially when it comes to games. It took a while to get big players interested in VR in the first place, we have seen Bethesda and a couple of others failing pretty badly just because they didn't get what VR is all about, Crytek's The Climb has been good for a long time, and surprisingly Ubishit really managed to make something good with Assassin's Creed Nexus VR (seriously, that is GOOD; maybe a bit of a problem here and there, but what doesn't have some).
  13. And then there's this: I already called horseshit on the specs and method, and it's good to know that audiophile snake oil is still complete horseshit, especially when someone throws some actual testing at it. There was also a recent Techmoan video about MQA-CDs, and the takeaway was pretty much "they seem to be just louder" (the classic "higher-quality audio" trick: just make it louder).
  14. A fast search says MQA seems to require a DAC that supports it, otherwise you get an "inferior" sample rate. Blablabla... Something about folding the samples and then unfolding them to get a higher sample rate out of a lower sample rate, blablabla... Up to 192kHz... Yeah, horseshit. Let me put it this way: pretty much the only place where you could find an audio studio constantly using over a 48kHz sample rate and over 16-bit depth will be something like a movie sound effect studio, where they need to do a lot of processing to the samples and have carte blanche to do whatever. Smaller studios pretty much only work with 16-bit 44.1/48kHz because the equipment is a lot cheaper; professionals move to 24-bit, but a lot of work is still done in 16-bit because it's easier to handle, cheaper (we're easily talking tens to hundreds of thousands), and the difference in the end result is like a fart in the Sahara, especially when you consider that the people who think they can hear the difference with their audio gear, and who would burn their pants if it wasn't what they think it is, are like 0.something% of the audience. There can be something like Sony flagship studios that have all the bells and whistles and deal with million-dollar mixing desks just so they can handle 192kHz 128-bit audio, but it's in the name, "flagship studio": their main objective is to look good while some extremely big-name artist is there, seemingly recording a song (that they never, ever recorded in their home studio), and investors and press can be amazed. Also, that "flagship" studio probably deals heavily with analog audio, because reasons. But MQA is apparently a completely DAC-side thing, and if that's really the case, your computer's soundcard EQs shouldn't affect it; unless it's even more horseshit and there's audio on your PC that its EQ can affect, at which point the whole "authentication" process of MQA is, well, your PC just applied EQ to the track before it was decoded by the "authenticated" hardware.
Is it good, is it bad? If you think putting a paper bag on your head and hiding under the pub table from an Earth-destroying meteorite makes a difference and makes you feel good, who am I to judge you and your opinion?
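(For the bit depth numbers above, the back-of-the-envelope math is standard: each bit of quantization buys roughly 6.02 dB of theoretical dynamic range. A quick sketch, just the textbook formula, nothing MQA-specific:)

```python
import math

def dynamic_range_db(bits):
    """Theoretical quantization dynamic range of a PCM signal:
    20 * log10(2**bits), i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(bits, round(dynamic_range_db(bits), 2))
# 16-bit -> ~96.33 dB, which already exceeds the noise floor of any
# normal listening room; 24-bit -> ~144.49 dB, which mainly buys
# processing headroom during production, not audible playback quality.
```

Which is exactly why studios can happily track and mix at 16/24-bit and the delivered difference to the listener is, as the post puts it, a fart in the Sahara.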
  15. We're talking about some really old stuff. Like this old: https://www.forbes.com/2009/08/04/online-anime-video-technology-e-gang-09-japan.html?sh=618fab5f576a As I recall, HS started only after the great purge of CR, when Bandai and Funimation were going against CR's then-investor because CR was very publicly profiting from piracy. And as I recall, HS even started from the point that CR was ripping off subgroups and they wanted to get something back by ripping off CR.
  16. If you're questioning the quality difference, do as I have outlined in Stahlmann's thread: internally (with software within your PC, so none of your audio equipment matters) record exactly the same song from both services at your chosen qualities, import them into a DAW like Reaper or whatever, as long as it can do phase reversal, make sure both tracks start at exactly the same time and play exactly the same (zoom in to the first sound wave or find some easy-to-align peak and match both tracks to it), reverse the phase of one track and play both tracks at the same time. The phase reversal will cancel the sounds of the normal track, and what you have left is the difference between the tracks. Just to make it clear: none of your audio equipment matters before the last step of listening to the outcome. If you need to crank your volume up a lot to hear anything, well, the difference between the tracks is so minimal it basically doesn't matter at the listening levels you would normally use. Needing to boost your volume to hear anything is the same as running the audio into an oscilloscope, zooming into a single sound wave and comparing differences in the 0.000..0001V region, where you will find differences in anything and everything. It is also the isolated difference, so there's nothing else than the difference; if you hear a small hiss, hearing that hiss through the full track is probably pretty damn impossible.
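(The null test described above can be sketched numerically too. This is a toy version with synthetic tones instead of real recordings; with real rips the critical step is the sample-accurate alignment the post mentions, which this sketch skips by construction. The function name and dB bookkeeping are my own illustration.)

```python
import math

def null_test(track_a, track_b):
    """Phase-cancellation comparison: subtract one track from the other
    (equivalent to inverting one and summing) and report how far below
    track_a the residual sits, in dB. A large negative number means the
    two versions are effectively identical at listening levels."""
    residual = [a - b for a, b in zip(track_a, track_b)]

    def rms_db(x):
        rms = math.sqrt(sum(s * s for s in x) / len(x))
        return 20 * math.log10(rms) if rms > 0 else float("-inf")

    return rms_db(residual) - rms_db(track_a)

n = 48000
tone = [math.sin(2 * math.pi * 1000 * i / n) for i in range(n)]

# Identical copies cancel completely...
print(null_test(tone, tone))            # -inf (perfect null)
# ...while a 0.1% level difference leaves a residual about 60 dB down,
# i.e. you would have to crank the volume hard to hear anything at all.
quieter = [s * 0.999 for s in tone]
print(round(null_test(tone, quieter)))  # -60
```

That "-60" is the point of the test: the residual is what you would actually be hearing as a "difference", and it sits a full 60 dB below the music.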
  17. I remember when Crunchyroll was blatantly stealing subs from fansub groups, cutting out the credits and collecting money for using their service like they were something more than a pirate site. Then they got a couple of garbage-level publishers behind them and sued a ton of fansub groups for "ripping their subs" (even though it was vice versa in the first place). And then there was the time of "oops, didn't mean to do that", when some titles were first shown as "This title is only available to paying subscribers" and, once you got the subscription, "This title is not available in your country". Then there were the upscaled titles, which were... Let's just say 360p needs a bit more than just stretching to get to 720p. Even now, from some corners of the internet where I lurk, I occasionally hear complaints about the quality of Crunchyroll subs and get the giggles, because their subs have sucked pretty much from the get-go, ever since they tried to be nice and not rip off fansub groups. Why am I not surprised they would be doing something like this too? (My memory can be a bit hazy about a service which, each time it comes up, I am more surprised is still a thing, and even more surprised that some people pay for that shit. So I can be wrong on some points, but there's a wild history when it comes to the asshole named Crunchyroll.)
  18. The features and the product are good. The screens are awesome, the lenses might need a bit of thinking (especially so that you could use glasses with them), the cameras are top notch and the processing is good. But from what I have gathered, the current software is just bad. Like, so bad that the OG Vive has better features than the Vision Pro in the use case the Vision Pro is kind of marketed for, and which has always been Apple's specialty. I mean that in SteamVR you can project all of your monitors into VR with the OG Vive and use them completely in VR without reaching for your mouse and keyboard, while from what I have seen with the Vision Pro, it's only the main screen of your Mac, and you can only use it with the peripherals connected to that Mac, which is a complete joke. Even Facebook has the same feature built into the Oculus software, and you can do exactly the same as with the Vision Pro but with all of your monitors, even with Quest 2/3 hand tracking. Somehow this is so un-Apple that the software side is just "whatever, we'll let someone else figure everything out". Even basic eye-tracking stuff like foveated rendering is actually only for apps/games that use Unity, because the Vision Pro strictly uses the foveated rendering built into Unity, which everyone else can also use, and at least HTC, Varjo and Facebook also have their own systems to leverage their own hardware better than Unity's generic system. I don't know if it even supports gestures with hand tracking. At least in Marques's video he gets the basic menu up only with the knob, which seems kind of primitive compared to Quest's "look at your right palm and touch index finger and thumb to summon the hand-tracking quick menu". Not to even mention that using BT controllers for gaming is straight back to the Oculus DK-era headsets.
I expected at least some kind of motion controllers, just because even the much more developed hand-tracking systems are still a bit in development (mainly getting the skeleton right when the hand is only partially visible and in bad lighting; and yes, that includes IR cameras like what the Leap Motion uses, and even with LiDAR it's still a problem).
  19. That's just details; my main point stands: the external battery frees them from the constraints of fitting the battery into the headset, and at that point the 2-2.5h use time is bad. Making it larger isn't a problem anymore. A 500g battery would be massive in your pocket, but if that means they could push the use time to 4-5h, hopefully closer to 6h, that 500g in your pocket or on your belt is nothing. Of course you can connect another battery to it and all that, but the biggest problem with standalone headsets in general is battery life, and especially on the Quest 3 it's bad without an extra battery strap or doing as I do and going wired to a 45W+ charger (wiring to any external battery is also a bit difficult, since the usual 20W USB PD/QC3 charging isn't enough to keep the headset on). If my 88.8Wh car starter powerbank could charge at 45W over USB-PD (sadly just 20W), I would so get a belt for it just to carry its 560g weight and have almost 5x the battery life of the Quest 3 (18.88Wh battery); and looking at Marques's video, that 560g battery isn't that much bigger even when rugged.
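(The "almost 5x" above is straightforward watt-hour arithmetic. A quick sketch using the post's own figures; the average-draw number is my assumption, derived from the internal pack lasting roughly 2 hours, and real-world runtime would be shaved further by conversion losses:)

```python
def runtime_hours(battery_wh, draw_w):
    """Idealized runtime: capacity divided by average draw.
    Ignores DC-DC conversion losses, which in practice eat 10-20%."""
    return battery_wh / draw_w

internal_wh = 18.88   # Quest 3 internal pack (from the post)
bank_wh = 88.8        # car starter powerbank (from the post)
draw_w = internal_wh / 2.0  # ASSUMPTION: headset drains its pack in ~2 h

print(round(bank_wh / internal_wh, 1))           # 4.7 (so "almost 5x" checks out)
print(round(runtime_hours(bank_wh, draw_w), 1))  # ~9.4 h ideal on the powerbank
```

Same math explains why the Vision Pro's external pack lasting only 2-2.5h is so hard to excuse: once the battery is off your head, capacity is nearly free.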
  20. The big thing: what is this actually for? The best I can come up with is a VR/AR extension to the Apple ecosystem, nothing more, something less. Realistically I expected far more polished things from Apple. There are things I expected Apple to do, and I expected them to do them way better than they did. Like hand culling; that was something I really looked forward to because it's something where I expected Apple to do the best shit ever, and it's not as bad as on the Quest 3, which basically doesn't have it, but it's far from what you can do with Ultraleap. Another big thing I expected but was let down by was the Mac integration. I expected Apple to blow away the competition, especially because they marketed "spatial computing" more as an extension to desktop and mobile stuff rather than going directly at the VR/AR/XR area. I expected that out of the box you could connect to your Mac and have at least 4-6 screens in a virtual environment, with at least one with full interaction (basically turning one virtual screen into a fully fledged touchscreen); instead you get just the one screen, used with the trackpad and keyboard. Both of these can be worked around with software, but what cannot be updated is the awful frontal screen. Just what the fuck, Apple? So much noise about "being present" and all the mumbo jumbo, and this is what we got? Seriously, is that some kind of joke, and after the first update or two they fully activate the screen and you get more than 10 pixels to use? The materials are what I expected: way too heavy. Glass is glass and metal is metal, no matter how much marketing and religion you throw at them. Over 600g on your face, not even including the battery, is just bad. The Quest 3 is about 100g lighter and includes the battery.
Also, the lack of upward support is a fucking design joke. One thing learned from every headset so far has been that you NEED to pull the headset upward from the front for comfort; even the teeny weeny Bigscreen Beyond (against which the Vision Pro is massive) gets a lot more comfortable with the over-the-head strap installed, and we are talking about a headset that is lighter than some eyeglasses. The battery is also in joke territory: 3,166mAh. I give Apple points for making that small a battery last that 2-2.5h, but holy shit, when going with an external battery basically frees you to use a massive battery capacity because you aren't bound to keep it within the headset, using a 10,000mAh or even bigger battery for thrice the use time (if you are comfortable enough to use it that long) shouldn't be a problem. But maybe the actual use case will become clear in the future, especially if the problems that can be fixed with software are fixed. Currently, though, it's an expensive niche product which lacks a lot, probably mostly because someone in San Francisco had their head way too far up their ass, thinking they were making something unique instead of making something more familiar and completely destroying the competition.
  21. Because it's ridiculously effective and cheaper than doing it yourself. Say, which one would you trust more when looking for a new car: A) the car manufacturer telling you their car is the best ever for you, or B) your friend Suzie being happy with her new car? B, unless you know Suzie is a complete idiot and you wouldn't trust her even to boil water without burning the whole house down; but you would still give Suzie's opinion more weight than the pure ad from the car manufacturer. So now you have your friendly neighborhood Linus Sebastian bringing you daily wacky content on YouTube, and he says putting seaweed into your water will turn your life upside-down and cure even the stupidity of Suzie. He is your friend, he is smart, he makes wacky videos, he would never lie to you. Will you buy the seaweed brand Linus recommends to you? Traditional marketing is super expensive. For a top-end example: you want your ad in the most-watched yearly sporting event in the US with crazy reach? That's going to be a 30-second ad spot during the Super Bowl, your marketing budget has just turned out to be over $7,000,000, and you still need to pay for the actual making of the 30-second ad. And that still doesn't guarantee anything; your ad could be just a fart in the Sahara if the halftime artist has a wardrobe malfunction. Normal TV ads depend on the time they are shown, the length and the show during which they are shown, but as I recall we are talking about hundreds to tens of thousands per play. And those are just to get the ad spot; you still need to pay for the actors, camera crew, editor, writer, and get the equipment, the shooting location and all that. An LTT video ad spot probably costs around $5-10k and it includes the making of the ad; per one million views, that is a sixth of the Super Bowl ad spot cost at worst (LTT videos collect around 1-2M views, while Super Bowl 2022 gained 115M views).
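(The "a sixth of the Super Bowl cost" claim is easy to verify as cost-per-thousand-views math, using only the figures quoted in the post; the $10k end of the LTT estimate is the pessimistic case the post names:)

```python
def cost_per_thousand(ad_cost, views):
    """CPM: what the advertiser pays per thousand views."""
    return ad_cost / views * 1000

# Figures from the post: $7M Super Bowl 2022 spot vs $10k LTT
# integration at the pessimistic end of the estimate.
superbowl = cost_per_thousand(7_000_000, 115_000_000)
ltt = cost_per_thousand(10_000, 1_000_000)

print(round(superbowl, 2))       # 60.87 ($ per 1000 views)
print(round(ltt, 2))             # 10.0
print(round(superbowl / ltt, 1)) # 6.1 -> "a sixth" checks out
```

And that is before counting production costs, which the LTT figure already includes and the Super Bowl figure does not.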
  22. Thaldor

    The scariest thing about Palworld isn't ho…

    Did I say they would? What I mean by "AI oddities" are things like needing a fire bow to shoot fire arrows. That's the kind of thing I could see ChatGPT spitting out: many games have fire bows that set arrows on fire, and many games have fire arrows that ignite when fired, so it could make 1+1=3 and require both. A human designing it would take just one of them, either the fire bow or the fire arrows, because requiring both is a bit too much work. Especially when there isn't even a separation between bow arrows and crossbow bolts. There is a logic to it, but it isn't the clearest logic.
  23. Thaldor

    The scariest thing about Palworld isn't ho…

    At least I don't know if there are any definitive statements from reputable (or non-reputable) sources about it being made with AI. It's more the feel I get that AI has been used a lot in this one. The whole game has the feel of an asset flip; everything seems too familiar, similar to something else without being exactly it. If it were made procedurally, it would have a ton of repetition, like dungeons built from a handful of pieces with very simple rules. But with "AI" (realistically, I don't like to call anything AI since we don't have a single real AI yet; they are just content-aware copy-paste algorithms) things get more complicated, and you can get stuff like multiple different forests with no repetition that still aren't 100% randomly generated.

    I have been working with random generation and procedural methods for some time now, and there are quite clear tells that give them away. Random is always random: either you spend a ton of time handling every single possible case, or there will be odd stuff here and there. With procedural generation you can see the template quite easily; there will be repetition. With AI you don't get the clear repetition, but you also don't get the complete randomness. What you do get are these "odd things": any AI image I have seen gives itself away, it just feels "off", and then there are little things like "the room has many lamps but none of them are the same", stuff that would be really odd for a human to do that way.

    I don't know how to put it well. Many things in Palworld feel more handmade than done by a simple algorithm, but they are still off, and definitely not man-made.
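    The procedural-generation tell described above, a template built from a handful of pieces with simple rules, can be shown with a toy sketch. Everything here (the piece names, the one placement rule) is invented purely for illustration; the point is that with so few pieces and a fixed seed, the repetition and the template become obvious fast:

```python
import random

# Toy "dungeon corridor" assembled from a tiny tile set with one simple rule.
PIECES = ["straight", "corner", "room", "junction"]

def build_corridor(length, seed=0):
    rng = random.Random(seed)  # fixed seed: the same layout every run
    layout = []
    for _ in range(length):
        piece = rng.choice(PIECES)
        # Simple rule: never place two rooms back to back.
        if layout and layout[-1] == "room" and piece == "room":
            piece = "straight"
        layout.append(piece)
    return layout

print(build_corridor(8))
```

    Run it twice and you get the identical corridor; generate many corridors and the same short patterns recur, which is exactly the visible-template repetition the post contrasts with AI-generated content.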
  24. The scariest thing about Palworld isn't how much it rips off pretty much everything (Pokemon, Zelda, Satisfactory, Rust and probably a ton of others), nor that it's mostly made with AI (you can see this in pretty much everything: the size of the map, the placement of things, the nonsensical "lore" of certain characters...).

     

    The scariest thing is that it works. There are games with massive teams, massive budgets, and a lot of heart and soul poured into them that are in a far worse state, and I don't even mean games like CP2077 or NMS with bad launches; I mean games that had great launches. Is it perfect? Absolutely not; there is no such thing as a perfect game, every game has bugs and problems. But if anyone had said we would have a mostly AI-made game in 2024 that isn't a complete train wreck but actually competes in functionality with the top tier, I would have said "bullshit" and been dead wrong.

    1.   Show previous replies  3 more
    2. ThankGodItsFriday

      ThankGodItsFriday

      28 minutes ago, Thaldor said:

      At least I don't know if there are any definitive statements from reputable (or non-reputable) sources about it being made with AI.

       


      You may not be entirely wrong; however, keep in mind that it has been in active development since as early as 2021. If it were mostly AI-generated, it probably wouldn't have taken as long to release as a game actively worked on by human developers.

    3. Thaldor

      Thaldor

      31 minutes ago, da na said:

      AI can't build a whole 3D model from scratch...

      Did I say they would?

       

      What I mean by "AI oddities" are things like needing a fire bow to shoot fire arrows. That's the kind of thing I could see ChatGPT spitting out: many games have fire bows that set arrows on fire, and many games have fire arrows that ignite when fired, so it could make 1+1=3 and require both. A human designing it would take just one of them, either the fire bow or the fire arrows, because requiring both is a bit too much work. Especially when there isn't even a separation between bow arrows and crossbow bolts. There is a logic to it, but it isn't the clearest logic.

    4. da na

      da na

      On 1/26/2024 at 10:02 AM, Thaldor said:

      Did I say they would?

       


      ...?

      That has absolutely nothing to do with AI

  25. Well, not really. He did a lot of tests trying to separate the metal from alumina (aluminium oxide) and got far, but he didn't manage to separate the aluminium from the iron. The one who "found" aluminium (successfully separated the pure metal) was Hans Christian Ørsted in 1824, but that is contested; he might still have had potassium in his aluminium, in which case the first person to make pure aluminium metal would be Friedrich Wöhler in 1845. This is a bit problematic, because Ørsted didn't even consider his aluminium a discovery at all, and when Wöhler continued the research, made his own aluminium, and argued that Ørsted's method would have left potassium in the aluminium, Ørsted accepted this; but later Johan Fogh showed that Ørsted could have made pure aluminium thanks to some miscalculations and happy accidents.

    If we want to talk about who found out that alumina contains a yet-undiscovered metal, that would be Baron de Hénouville in 1760, who was probably the first to try to reduce alumina to a metal, or Antoine Lavoisier, who in 1782 theorized that alumina is the oxide of a metal, or, on an even broader reading, Pierre Macquer, who in 1758 wrote that alumina resembles a metallic earth (pretty much the first time aluminium was considered a metal rather than a salt). Unless the ancient Greeks and Romans figured out more about alum (natural aluminium sulfate) but didn't leave written records of it; and in roughly the first couple of centuries AD some aluminium-containing alloys were used in China, but again there are no written records, so it's hard to say how far they got.

    Either way, Sir Humphry Davy didn't manage to separate pure aluminium, and so he didn't discover it. He just coined the other name.