
Thaldor

Member
  • Posts

    1,288
  • Joined

  • Last visited

Awards

This user doesn't have any awards

2 Followers

Profile Information

  • Gender
    Male
  • Location
    Finland
  • Occupation
    XR developer

System

  • CPU
    AMD Ryzen 7 3700X
  • Motherboard
    Asus ROG Strix X570-E
  • RAM
    2x16GB HyperX Fury @3600MHz CL16
  • GPU
    ASUS RTX 3060 TUF Gaming
  • Case
    Nanoxia Deep Silence 3
  • Storage
    Samsung HD502IJ 500GB (C-drive since 2010) + Samsung 850 EVO 500GB + 2x 4TB Toshiba X300 + Samsung Evo 970 Plus 1TB
  • PSU
    Corsair AX860i
  • Display(s)
    2x Asus VG27AQ + Quest 3
  • Cooling
    Noctua NH-D15
  • Keyboard
    Logitech G815
  • Mouse
    Logitech G903
  • Sound
    Creative Sound Blaster Z
    AXE I/O One
    Yeti X
    Yamaha A-670 with old Philips speakers / Beyerdynamic DT 900 Pro X
  • Operating System
    Windows 11 Pro 64-bit

Recent Profile Visitors

2,553 profile views
  1. Let me get this right: you want a single audio output from multiple computers, i.e. to hear audio from multiple sources through one pair of headphones without latency? In that case, get a mixer. That's what streamers do: they use a mixer to get a monitoring output from the PC/console/whatever, their microphone and (possibly) their streaming PC, then they send the mixer output back to the streaming machine, delay the video to match the audio latency coming from the mixer, and stream the whole package. As in:
Mac -> [mixer line in 1]
PC (-> DAC, optional) -> [mixer line in 2]
Microphone/whatever -> [mixer input 3]
[Mixer monitoring/output 2] -> amp -> headphones
[Mixer main output] (-> ADC, optional) -> PC -> stream
Also, why are you running your Mac into both inputs of the M-Track? You know the first input (the XLR/TRS combo jack) already does 2-channel input? At least it is an XLR/TRS combo jack, so I would presume it does stereo input. Either way, you will always have latency between the M-Track input and whatever PC output unless you use ASIO to bypass the Windows sound mixer completely (that's what ASIO does: it cuts out the whole Windows audio stack and leaves it to the USB interface and the software to figure it all out) and use the M-Track for output as well. If your DAC supported ASIO (which Schiit gear doesn't, because audiophile stuff and reasons) you could use ASIO4ALL to control the ASIO drivers and reroute things, but as Schiit doesn't make ASIO drivers for their stuff, this doesn't matter.
You can try to pull double duty on the M-Track if you don't need stereo output from your desktop (i.e. you just need notification sounds and whatnot from the desktop mixed in with the Mac output): connect your desktop to the M-Track's input 2 (mono, the sole TS jack), set the Windows output to the M-Track, then use the ASIO side of the M-Track to pull both inputs into OBS (or whatever), set the ASIO output back to the M-Track, and connect your amp to the RCA outputs of the M-Track. So your signal path would be:
Mac -> [3.5mm TRS to 6.35mm TRS] -> M-Track input 1 (the combo jack; phantom power off!)
Desktop -> [something that turns TRS into a single TS, i.e. stereo to mono] -> M-Track input 2 (set to line instead of instrument (the guitar symbol))
M-Track (ASIO) -> USB -> desktop running OBS/whatever software with ASIO support -> M-Track (ASIO)
M-Track RCA output (or the headphone out) -> amp -> headphones
If you need to use the DAC, it goes between the desktop and the TRS-to-TS adapter feeding M-Track input 2. On the output side you have a choice: you can plug your amp into the RCA outputs of the M-Track and listen to the final output, which will have some minimal latency (with ASIO and no heavy effects or processing, a few milliseconds), OR you can plug your amp into the headphone output of the M-Track and monitor the inputs with no latency at all (headphone switch on the M-Track set to "Direct"). For streaming or whatever, you need to split the OBS output to some third output (virtual or whatever) that becomes the streaming audio output, SEPARATE from the "default" Windows audio output (which is connected to M-Track input 2 as mono).
TL;DR: Just get a mixer, connect all your noise-making machines to it, and run one output to the headphones and a second one to the stream.
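To put rough numbers on the "few milliseconds" of ASIO latency mentioned above, here's a minimal sketch of where buffer latency comes from. The buffer sizes are illustrative assumptions (64 frames is a common low-latency ASIO setting, ~10 ms is a typical Windows shared-mode buffer), not anything specific to the M-Track:

```python
def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Latency added by one audio buffer: frames divided by sample rate."""
    return buffer_frames / sample_rate_hz * 1000

# A small ASIO buffer of 64 frames at 48 kHz:
print(round(buffer_latency_ms(64, 48_000), 2))   # 1.33 ms
# A typical ~10 ms shared-mode buffer (480 frames at 48 kHz):
print(round(buffer_latency_ms(480, 48_000), 2))  # 10.0 ms
```

In practice the round trip passes through at least one input and one output buffer, so the real figure is a small multiple of this; that's why ASIO with small buffers stays in the "few milliseconds" range while the regular Windows path doesn't.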
  2. Add this to the list of things to think about before pouring your hard-earned money into Nintendo products.
  3. The couple of monitors I have seen with FRC have been like that. I must say I haven't dug deep into this, since I decided to go straight for native 10-bit panels and didn't need to look further. But my guess is that the panel itself can do 170Hz at most, and the FRC must fit inside that, which means the "refresh rate" of 10-bit content must be lower than 170Hz to leave time for the flickering between two colors within the panel's refresh rate.
It's easier to explain what I mean with 3D TVs and glasses. 3D content requires that for each frame of content, the TV shows one frame per eye to create the 3D effect. This means that if the TV has a 120Hz panel (120 frames per second), it can only do 60 "fps" 3D content: the panel is still running at 120Hz, but the content must run at half of that because the frames are divided between the eyes. Same with FRC: the monitor must flicker between two 8-bit colors to recreate one 10-bit color, which eats into the rate at which it can show content, because it needs to show more frames per frame of content to recreate the 10-bit colors. The system doesn't know the monitor uses FRC, so the monitor must slow down the rate at which the system sends content to leave itself time to show it with FRC; it does that by simply reporting a lower refresh rate.
  4. Because your monitor only supports 8-bit + FRC, so it's either "10-bit" at 120Hz OR 8-bit at 170Hz; that's directly from ViewSonic. There isn't any way around that, because the monitor itself acts as the HDR display: it takes that 120Hz 10-bit signal and turns it into 8-bit + FRC at 120Hz, which is probably close to plain 8-bit at 170Hz, because FRC is basically just flickering between two 8-bit colors to mimic a 10-bit color.
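To illustrate the flickering: a toy sketch of temporal dithering (FRC), assuming a simple 4-refresh cycle. Real panels use fancier spatio-temporal patterns; this just shows how averaging two adjacent 8-bit levels over a few refreshes approximates one 10-bit level:

```python
def frc_frames(value_10bit: int, cycle: int = 4) -> list[int]:
    """Approximate a 10-bit level (0-1023) by alternating two adjacent
    8-bit levels over a short cycle of panel refreshes."""
    low = value_10bit // 4          # nearest 8-bit level at or below
    high = min(low + 1, 255)        # next 8-bit level up, clamped to 8-bit range
    remainder = value_10bit % 4     # how many refreshes show the higher level
    return [high] * remainder + [low] * (cycle - remainder)

frames = frc_frames(513)            # 10-bit 513 sits between 8-bit 128 and 129
print(frames)                       # [129, 128, 128, 128]
print(sum(frames) / len(frames))    # 128.25 == 513 / 4, the 10-bit level
```

Each 10-bit "frame" costs several panel refreshes, which is exactly why the monitor reports a lower refresh rate once FRC is in play.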
  5. I don't know exactly what is included in the media bargaining code, but depending on what it covers, these are my opinions:
1) The scrutinized part is including the news site's own summary of the article along with the link. In that case I'm fully fine with the social media platform having to license that content, because it's someone else's content they are using.
2) It's a platform-written summary of the article included with the link. This is kind of an asshole move but a grey area, especially now that the news media has become so awful with clickbait that YouTubers seem honest with their thumbnails and video titles by comparison. I would probably pass the linking fees straight on to the users, because seriously, I would pay for a service that lets me skim world events without the clickbait garbage modern news media likes to sow.
3) It's not about summaries at all; the platform owes money just for linking to the article in any form (be it the page title, the article title or just the plain URL). Then the news media has gone completely greedy and should just crawl under a rock and die. The platforms should outright delete every post linking to those sites and make getting to those news sites as hard as possible (no hyperlinks in any form; if a user wants to visit, it's typing the URL, copy-paste, or an external search engine), and as a cherry on top, ban the news media outlets from the platforms (they didn't want their content linked on the platform, so why would they want to be on the platform in the first place?).
  6. If you're that much against the power consumption, realistically you can't do much else than let it collect dust or take the $500 and sell it. Running it as a Steam Link host to stream games to mobile devices is pretty pointless if you have a better PC: most likely, if you're streaming a game to a mobile device, you won't be using your main PC at the same time (unless it's rendering or compiling or something), so you could just as well keep that one running and stream from it. Using it as a render farm or similar is also pointless if you don't actually need a two-PC render farm. Using it as a NAS/media PC is a bit silly since it's way overkill (and there's the power consumption again). You probably don't have a 3D printer, so a dedicated PC for that is out of the picture. HTPCs are quite niche these days with Steam Link and Android TVs everywhere. You could use it to host dedicated game servers, but that would require that you play games with player-hosted servers. You could always nail the parts to a wall and make an art piece out of them, if selling them or just holding on to them seems like a waste. (Just saying.)
  7. Nintendo doing Nintendo stuff; you reap what you sow (and that goes both ways: going against Nintendo, and people supporting Nintendo by buying what is basically an overpriced and kneecapped Nvidia Shield (originally K1)). Ars Technica breaks the thing down a bit more for those interested. Basically Nintendo is doing what Nintendo does: fighting windmills with Mario money, going after decades of rulings favoring "bring your own game" emulation, testing the DMCA's ban on circumventing copy protection, arguing that even PHYSICAL games are only licenses to use that one single copy, downplaying homebrew (which I kind of understand, because how many actual homebrew games have been released? There's mostly one of each necessity program, like a media player and a browser, and a second one if the first has huge flaws; leaving emulators out of this for "you know" reasons), dismissing accessibility ("Yuzu would need to point to some ACTUAL example of accessibility"), and mostly seeking to frighten people away from emulating that overpriced and kneecapped Nvidia Shield.
  8. The key is in the word "reference". The point isn't to have a median value; the point is to have an exact, set value that is globally the same on every device required to show that same value. Instead of calculating a median flight path from actual flights, it's an idealized flight path against which every flight is compared. Except that in the case of image and sound, the ideal happens to be pretty boring and bland, because humans are odd: we like a bit more saturation and contrast to make an image more captivating, a bit more bass and treble to make music sound more alive and bigger. So if the image/music is good on the reference gear, it will be great on the consumer gear.
  9. I have to say, now that I have taken a free deep dive into premium-quality Tidal, that there is a difference: they have some very bad mixes. That could also be my taste in music, but seriously, holy fucking shit, where the fuck have they found such flat mixes? I have heard more dynamic planks in hardware stores. I had to go through my whole setup, and even disabled every software EQ, to find out whether it was something in my signal chain that made Tool sound like a wooden hammer with as much balls as a castrated cat, and Deep Purple sound like someone recorded them off C-tapes. Even my own LP rips from my father's collection, made somewhere around the 90s to early 00s at garbage-level mp3 quality, had more dynamic range than whatever mixes Tidal found at the Goodwill. And I don't even want to talk about Rainbow... Stargazer's intro snare is dead on the Tidal mix. I have some really high quality rips and some absolute garbage rips of the Rising album, but none of them managed to turn "tshhhh" into "tsk" (even both of the Deluxe Edition mixes lack the lower spectrum compared to the SHM-CD that the "New York Mix" should come from; those are way better mixes than the non-Deluxe ones on Tidal, but I have no idea where they found those bad mixes). And I am not talking about something that would come from file formats or bitrates (those can have an effect, but not this much). The low-level rumble is just gone, especially from Tool; in Jambi it's like the Tidal mix was afraid of breaking something by really driving that bass. And before someone says my CD rips are higher quality because CDs have higher quality audio: no, Tidal should have "superior dynamics" with its 24-bit 96kHz compared to my 16-bit 44.1kHz FLACs, but stuff like Schism's 1:20 bridge is just not there, and the 2:00 mark sounds like someone booked Tool to play a fucking school disco, that's how small the soundstage is.
I wasn't sure whether I had screwed something up or added some EQ to my rips, so I started strictly comparing Tidal to Spotify (without any EQ), and it was the same thing: the Tidal mixes were flat, and the Spotify mixes were, as far as I can tell, identical to my rips. Also, how fucking stupid must someone be to fall for the "Max/MQA" garbage? They don't even hide that it's mumbo jumbo; they literally say "Max/MQA up to 24-bit 192kHz". Just to make it even clearer: "UP TO". They aren't claiming the "Max/MQA" quality is in any way superior to "Max/FLAC" or High or anything; it's just probably somewhere up to there, or not even close, but fuck if they'll tell you. It's the same scam as if I sold my car with "you know a Toyota Yaris can go over 300km/h on dirt roads?" And in reality, yes, a Toyota GR Yaris (Rally2) tuned and built for WRC can fucking rip up a dirt road, but my Toyota Yaris 1.0L can hardly reach 150km/h downhill on the highway. Hey, "a Toyota can go over 300km/h": I am not wrong, but I am misleading you like a pig to the slaughterhouse. They're not afraid to state the bit depths and sample rates for FLAC recordings, but with MQA you get basically no information unless you rip the songs and look for yourself, at which point they would probably pull the standard audiophile scam argument, "it's not our system, it's yours", even if you had airtight proof that it's an unaltered file from their servers.
And when it comes to audiophile stuff: always remember that most of the Beatles albums were rough-mixed with the Beatles in a studio that only operated in mono, and every time you hear a stereo mix of the Beatles, it was mixed entirely by some audio technician as an "extra mile", without the Beatles giving two fucks about it. Back then and earlier, stereo systems were so rare that pretty much no one had them, and what few stereo mixes were made existed mostly to market that the studio could do them, not because the music sounded any better. That's an example of what happens at the other end: unless you have insider knowledge, you have pretty much zero idea how the thing was supposed to sound, and the best you can do is listen to it however you think it sounds best. So go crazy and experiment: add bass, twist the V, play with the reverb, and remember that there's a reason "studio monitors" and "high-end HiFi" are separate things.
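On the bit-depth point: the theoretical dynamic range of linear PCM is roughly 6.02 dB per bit plus a small constant, which is why 24-bit looks so much better on paper. A quick sketch of that textbook formula (nothing Tidal-specific):

```python
def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical quantization dynamic range of linear PCM:
    about 6.02 dB per bit plus 1.76 dB."""
    return 6.02 * bits + 1.76

print(round(pcm_dynamic_range_db(16), 2))  # 98.08 dB  (16-bit CD / FLAC rip)
print(round(pcm_dynamic_range_db(24), 2))  # 146.24 dB (24-bit "hi-res")
```

The catch is that the format's headroom doesn't help when the master itself is compressed flat: a brickwalled 24-bit file still has less audible dynamics than a well-mastered 16-bit one.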
  10. One magic word: steaming. Carrots, green beans, broccoli, asparagus, sprouts, potatoes, pretty much whatever you fancy, chopped to a uniform size (nice big pieces; baby carrots are about the minimum size you want, because you want the steam to get everywhere and move freely). Steam for 5-15 minutes, add some olive oil, a pinch of salt, freshly ground black pepper and, if you want, some herbs and lemon juice (butter and honey are also excellent). I would recommend investing in a tiered steamer pot, so you can have a good amount of water without needing to top it up, and a couple of steaming layers, so you can start the hard stuff like carrots, potatoes and cauliflower early and add the next layer with green beans, broccoli and other soft stuff in the middle (you don't want them too soggy). But you can also just use something like a metal strainer in a deep pot to keep the veggies out of the water (you don't want to boil them, so not in the water; above the water, in the hot steam). You can also use a microwave-safe bowl: put the veggies and half a deciliter of water in it, close it tightly with a lid or plastic wrap, and microwave on high for about 5 minutes; check, and if they're not done, add more water and repeat. The best part is, you cannot fail. Try them with a fork: if too hard, let them sit a couple of minutes more. If they get too soggy, that's the worst, not nice to eat, but don't worry, you cannot burn the veggies, so just throw them into a blender or mash them into a purée and they are saved.
  11. LTT isn't tech support, especially not for people with actual needs and problems who don't want to be janky AF and fiddle with everything all the time. I am not saying LMG doesn't have people who know what they're doing and have expertise and wisdom in their fields. It's more that LTT viewers probably don't want to watch a 15-minute recap of setting up good old solid storage that has no glitter, no ADHD, no brand-spanking-new features, and doesn't boast the newest and most powerful hardware possible. Building a rock-solid editing rig that isn't packed with halo products and doesn't have some "wow" thing in it, or installing a normal NAS, just isn't an LTT video. I presume the staff knows that and wouldn't make that kind of video, but would instead stuff it like a Thanksgiving turkey with exactly the kind of stuff that leads to janky-as-fuck setups that "just don't work" and require more support and care afterwards than people would, or even could, give. Imagine some (imaginary) Top Fuel dragster channel making a video about fixing someone's work truck. Could they do it? Totally, but the guy probably won't be very happy with his truck now running only on nitromethane, being incapable of driving longer than a minute, and requiring an engine rebuild after every couple of drives. And the viewers wouldn't want to watch them rebuild an everyday diesel engine unless it was at least rigged with enough horsepower to go drag racing.
  12. There's a ton of Creative marketing speak in there. The SBX (Pro Studio) Surround is just an effect, like the Crystalizer, Bass and everything else under there in the software; the slider controls how strong the effect is, which would be kind of stupid if you had an actual surround setup (like a slider to turn your 5.1/7.1 system into a 2.1 system). The Surround Virtualization works for headphones and still does the same as its earlier iterations did. If you activate it and check the Windows audio settings, you will see it has changed your settings to 7.1 speakers (if you didn't have that already), and it basically takes that 7.1 sound and does the same thing DTS/Dolby Atmos do: mixes it down to 2-channel audio. It's basically the same, just further down the signal path, so it doesn't receive the spatial audio data (the sound source position relative to the listener), only the speaker output, and downmixes that for headphones. I say "basically" because it's a bit more complicated, on the level of simulating a human head in a room with the audio source and calculating the differences in volume and timing between the ears hearing the sound, and all that. The big difference is that spatial systems calculate what the listener HEARS, while an old surround system plays the sound from the SOURCE (more or less; a surround system tries to play the gunshot where the gun is and lets your hearing deal with the rest, while a spatial system calculates how the character you play would hear the gunshot). When it comes to games, software and those Windows spatial sound options: if a game supports actual spatial audio, it sends DTS/Atmos/whatever the object positional data (where the sound source is) and the software does the processing, while games that don't support them fall into line with that Creative Surround Virtualization and just hand over whatever surround sound the game puts out.
TL;DR: The Surround Virtualization is a working system for headphones, but know that if you also turn on DTS/Dolby Atmos/whatever spatial audio, you may have two similar systems stacked on top of each other, each working slightly differently, and that may cause problems.
  13. Just to clear one thing up: the SBX Surround is just a spreader, not virtual surround. That is, SBX Surround is an effect that spreads whatever sound to make it sound like surround; it has zero idea about spatial audio or anything relating to the position of the sound. It's the same as the old stereo spreaders that took 1 audio channel in, used some high- and low-pass filters to create two slightly different audio streams, and gave 2 channels out: it made a mono mix sound a bit more like a stereo mix, but in no way turned a mono mix into a stereo mix. DTS and Atmos, at least their Windows encoders, are more like downmixers: they take in the actual surround mix and downmix it to stereo while retaining the positional "information" (as in adding delays and such to trick your brain into thinking the sound comes from a certain direction; a sound coming from the back right channel gets slightly lower volume and is played a fraction of a millisecond earlier on the right channel than on the left, with the left volume lowered even more, and yes, our hearing really is that easy to fool). They're basically turning surround sound into the ages-old virtual barbershop demo.
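Those fraction-of-a-millisecond delays can actually be estimated. Here's a sketch using Woodworth's spherical-head formula; the head radius and speed of sound are standard textbook values, not anything Creative or DTS publishes:

```python
import math

def interaural_time_difference_ms(azimuth_deg: float,
                                  head_radius_m: float = 0.0875,
                                  speed_of_sound: float = 343.0) -> float:
    """Woodworth's estimate of the arrival-time difference between the ears
    for a distant source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta)) * 1000

print(round(interaural_time_difference_ms(90), 3))  # ~0.656 ms, source directly to one side
print(round(interaural_time_difference_ms(30), 3))  # ~0.261 ms, source 30 degrees off-center
```

A headphone virtualizer recreates direction by delaying and attenuating one channel by amounts on exactly this scale, which is why the trick works at all.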
  14. Both of the big ones are high-power connectors (3-pin 1-phase 200-240V 16/20A AC, 5-pin 3-phase 300-480V 16/20A AC), while XLR is usually an audio connector and DMX512 (or XLR5) is often used in stage lighting; the only thing similar between them is the number of pins. You can guess what would happen to the vocalist if, instead of +48V DC phantom power, the mic received 240V AC; that's pretty much the same as your phone getting +48V phantom power instead of some leakage voltage (milli/microvolts) through the combo jack. My best guesses are that it's either some Cannon connector made into a grounding bar, or a system for not losing your important poking thing, which someone might even mistake for a key. Also, to get your daily dose of cursed tech, just head to r/spicypillows; not that the tech there is cursed, but there are some really cursed pictures right there.
  15. Favorite? But for real products: the 3.5mm TRS to dual-XLR adapters. They're fine if you really know what you're doing, but once you connect your phone to the +48V mic port as an AUX input, oh boy, are you going to have a "fun" time.