Why is this PCIe Card RADIOACTIVE?

AlexTheGreatish

By using an atomic clock, the clocks of different computers can be synced to within a dozen nanoseconds, and with that, performance can skyrocket.

 

Check out the Open Compute Project: https://www.opencompute.org/

Build your own Time Card: https://github.com/opencomputeproject/Time-Appliance-Project

 

 

 


Big gaming events often involve teams from the EU, NA, or Asia having to fly out to compete in person for zero-lag LANs.

Could atomic clocks in each location let teams play with near-LAN lag?


Hi, software dev here, first-time poster.

I am confused. The routing of IP packets over WANs is not fixed; the distance and the nodes traversed are unknown, so you also cannot predict how long a packet will take to arrive at the other endpoint. On a LAN (a closed network) this might have some advantages, but even there collisions can happen, causing delays or dropped packets. In other words, no matter how precisely your clocks run, the roads have different speed limits, and you often don't know which one is being used or whether the road is blocked entirely. So most of the examples in the video just aren't applicable, because in practice you wouldn't use timestamps (oh god, that's what people did in the '80s and '90s) but concurrency tokens in combination with CQRS.

Or what am I missing here?
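
To make the concurrency-token idea concrete, here's a minimal sketch (the names are mine, not from any particular framework) of optimistic concurrency, where a stale update is rejected no matter when its packet happened to arrive:

```python
import uuid

class Record:
    """A piece of state guarded by a concurrency token instead of a timestamp."""
    def __init__(self, value):
        self.value = value
        self.token = uuid.uuid4().hex  # changes on every successful write

class StaleUpdateError(Exception):
    pass

def update(record, new_value, expected_token):
    # The client must present the token it read; if another update landed
    # first, the tokens no longer match and we reject, no matter how the
    # packets were routed or delayed on the way here.
    if expected_token != record.token:
        raise StaleUpdateError("state changed since you read it; re-read and retry")
    record.value = new_value
    record.token = uuid.uuid4().hex
    return record.token

# Two clients read the same record, then both try to write.
rec = Record("initial")
t_a = rec.token
t_b = rec.token
update(rec, "client A's write", t_a)      # succeeds
try:
    update(rec, "client B's write", t_b)  # rejected: token is stale
except StaleUpdateError as e:
    print("B must retry:", e)
```

The point is that ordering comes from the token comparison, not from any clock, so packet transit times simply don't enter into it.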


I wonder if this could be used as an anti-cheat of sorts? If timing can be guaranteed, would it be possible to withhold the data of other players until the exact moment it's actually needed?



45 minutes ago, FakeKGB said:

If you peel it, it mashes itself in the slot, but with the peel on, the banana does not go in the slot.

 

Side note, are bananas conductive?

Took me years to figure that out.

 

Sorta, if it's wet enough


2 minutes ago, Zodiark1593 said:

I wonder if this could be used as an anti-cheat of sorts? If timing can be guaranteed, would it be possible to withhold the data of other players until the exact moment it's actually needed?

This guarantees that the time on multiple computers is in sync; it doesn't guarantee that the transfer of data is instantaneous. I'm not sure how this is supposed to prevent cheating.



16 minutes ago, Zodiark1593 said:

I wonder if this could be used as an anti-cheat of sorts? If timing can be guaranteed, would it be possible to withhold the data of other players until the exact moment it's actually needed?

 

13 minutes ago, Eigenvektor said:

This guarantees that the time on multiple computers is in sync; it doesn't guarantee that the transfer of data is instantaneous. I'm not sure how this is supposed to prevent cheating.

Like Eigenvektor said, it doesn't eliminate delays between computers. If the computers have in-sync clocks, though, the server can verify when things occurred. This would make it more difficult to spoof packets, but something like an aimbot (especially the new AI ones) would have to be dealt with in more traditional ways.
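
For the spoofing part, here's a rough sketch of what synced clocks could buy you (my own illustration with made-up parameters, not how any real anti-cheat works): if client and server share a key and trust their clocks to within some bound, the server can reject events whose signature fails or whose claimed time is implausible:

```python
import hmac, hashlib, time

SHARED_KEY = b"per-session secret"   # hypothetical: exchanged at login
MAX_SKEW_NS = 50_000_000             # 50 ms; generous if clocks sync to ns

def sign_event(payload: bytes, ts_ns: int) -> bytes:
    # Bind the payload to its timestamp so neither can be altered alone.
    return hmac.new(SHARED_KEY, payload + ts_ns.to_bytes(8, "big"), hashlib.sha256).digest()

def server_accepts(payload: bytes, ts_ns: int, mac: bytes) -> bool:
    # Reject if the signature is wrong (spoofed or altered packet)...
    if not hmac.compare_digest(mac, sign_event(payload, ts_ns)):
        return False
    # ...or if the claimed timestamp is too far from the server's own clock.
    return abs(time.time_ns() - ts_ns) <= MAX_SKEW_NS

ts = time.time_ns()
evt = b"fire_weapon"
mac = sign_event(evt, ts)
print(server_accepts(evt, ts, mac))          # True
print(server_accepts(evt, ts - 10**9, mac))  # False: MAC no longer matches
```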


As I already stated, that's not how the internet works, though. Routing is not fixed and you never know the transit times of packets, so you always need additional concurrency checks on data updates to ensure nothing is corrupted, or to catch that something went wrong because a packet took longer than usual.


At 48.8 billion years, the half-life of rubidium-87 isn't particularly short, so a banana is way more radioactive, to say the least.

The atomic clock part doesn't even care about the radioactivity, but rather about the fact that rubidium has a convenient absorption band at 6 834 682 610.904 Hz (about 6.8 GHz).

I.e., send in a continuous RF wave at that frequency and it will get attenuated fairly noticeably; drift off by a few fractions of a Hz and the attenuation drops noticeably. The impressive thing with these standards is how they work out which direction they drifted off in; after all, the VCO running the show isn't an atomic clock in itself, it's just carefully balancing on a needle.
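
A crude sketch of that balancing act (all numbers illustrative; this ignores the actual physics of the lamp and cell): dither the VCO either side of the absorption dip and steer toward whichever side absorbs more:

```python
RB_LINE_HZ = 6_834_682_610.904  # rubidium-87 hyperfine transition

def absorption(f_hz):
    """Toy Lorentzian dip centered on the Rb line (illustrative only)."""
    detune = f_hz - RB_LINE_HZ
    return 1.0 / (1.0 + (detune / 200.0) ** 2)

vco_hz = RB_LINE_HZ + 37.0   # VCO starts slightly off-frequency
dither_hz = 5.0
gain = 200.0                 # loop gain, chosen for quick convergence here

for step in range(40):
    # Probe both sides of the current frequency...
    low = absorption(vco_hz - dither_hz)
    high = absorption(vco_hz + dither_hz)
    # ...and steer toward the side with stronger absorption (the dip center).
    vco_hz += gain * (high - low) * dither_hz

print(f"locked to within {vco_hz - RB_LINE_HZ:+.3f} Hz of the line")
```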

 

Though normally one wouldn't have an atomic clock for each computer, but rather one central clock whose reference signal is simply distributed to all of them. That might sound hard, but depending on how far the signal has to travel and the quality of one's cabling, it can be fairly trivial.

 

I also have to argue against a lot of the "examples" at the end.
Gaming was honestly one of the more plausible examples, but even there the impact would be minor. (And there's no reason to use an atomic clock for that, considering game loops aren't sub-millisecond as it is... A good oscillator and the occasional high-accuracy ping can get your system clock to within a few µs with fairly bog-standard network equipment.)
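
For reference, that "high accuracy ping" is just the classic NTP/Cristian exchange. A minimal sketch of the arithmetic, assuming the usual caveat that the path delay is symmetric:

```python
def clock_offset(t1, t2, t3, t4):
    """
    Classic NTP offset/delay arithmetic.
      t1: client send time     (client clock)
      t2: server receive time  (server clock)
      t3: server send time     (server clock)
      t4: client receive time  (client clock)
    Assumes the outbound and return paths take equally long.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the client clock trails the server
    delay = (t4 - t1) - (t3 - t2)          # round trip, excluding server processing
    return offset, delay

# Client clock runs 250 µs behind the server; one-way delay 400 µs each way.
t1 = 1_000_000_000            # ns, client clock
t2 = t1 + 400_000 + 250_000   # server clock = true time + 250 µs
t3 = t2 + 10_000              # server spends 10 µs processing
t4 = t1 + 400_000 + 10_000 + 400_000

offset, delay = clock_offset(t1, t2, t3, t4)
print(f"offset ≈ {offset} ns, path delay ≈ {delay} ns")  # ≈ 250 µs, 800 µs
```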

 

Using the timing of packets to detect the failure of fiber repeaters on the ocean floor might sound like a cool application, but one can simply look at the phase shift between different fibers in the bundle. Send a pair of packets down two of them and you know how far apart they normally appear on the other side; if that changes, you know something on the route has changed. The timestamps of the packets themselves don't matter in the slightest. Or one can just take a fiber down, light up a dark one, and round-robin test them on a daily basis. And it isn't as if fishing trawlers don't drag straight through ocean fiber cables every now and then, so cables need to be pulled up and repaired regardless. (Fishing boats ripping up fiber cables isn't all that common, but it does happen. There are a lot of fiber cables on the ocean floor, so in the grand scheme it isn't a major disruptor, but surely an interesting catch nonetheless.)
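
The packet-pair idea in sketch form (numbers invented; a real monitor would average many samples to reject jitter). Note that only the relative skew matters, never the absolute timestamps:

```python
def pair_skew(arrival_a_ns, arrival_b_ns):
    """Arrival-time difference of a packet pair sent down two fibers of a bundle."""
    return arrival_b_ns - arrival_a_ns

# Calibrate the normal skew once, then watch for drift.
baseline_ns = pair_skew(1_000_000, 1_003_400)   # fiber B normally lags 3.4 µs
TOLERANCE_NS = 200

def check(arr_a, arr_b):
    drift = pair_skew(arr_a, arr_b) - baseline_ns
    if abs(drift) > TOLERANCE_NS:
        print(f"skew changed by {drift} ns: something on the route changed")
    else:
        print("skew nominal")

check(2_000_000, 2_003_350)   # within tolerance
check(3_000_000, 3_011_800)   # a repeater or splice changed: 8.4 µs of drift
```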

 

As for using fiber cables to detect gravitational waves: network packets aren't an ideal medium here. One would likely instead use a pure tone, i.e. a single wavelength of light with no data modulated on top. (And such a fiber network does exist; its reference is currently synthesized from some 400 atomic clocks all over the world, creating a time reference that is exceedingly accurate.)

 

Security is a hard one. Unless one owns the explicit route, the latency of a given link can easily vary by a good few hundred µs even over relatively short distances, and that is plenty of time for some types of man-in-the-middle attacks. (Though good key exchange and proper encryption make these attacks infeasible to start with, especially if one sends the keys via a secure carrier, like putting them on a thumb drive and transporting it in person between the sites that need secure communication. And yes, this is a thing that some companies, organizations and governments do, though hard drives store more keys. But lose sight or control of the drive and the keys are void, so you send new ones until you succeed without ever letting the drive out of your hands.)

 

Streaming? Really....

"It should be possible to have much higher image quality and fewer chances of corruption occurring along the way" Just no... Basic data integrity and timestamping aren't really all that related, and a fair bit of stream corruption happens due to things outside of one's control: the streaming system getting overburdened for a few ms by background tasks, one's internet connection suddenly being flooded by other devices on the network and leaving too little room for the stream to run smoothly, or similar bandwidth issues on the ISP's end. Streaming does, though, suffer from fairly small buffering windows, meaning any small interruption in service quality quickly shows. Better timestamps don't solve any of that.

 

In the end, this isn't the first clock-referenced timestamping card I have seen. But normally one would just use an FPGA and an SMA input for a distributed reference clock, a simple 10 MHz input, and call it a day. After all, it's fairly pointless to have more than one or two (one as backup) atomic clocks for a whole building. (Though distributing reference clocks of this nature isn't a trivial affair, it usually brings some major advantages.)


@Nystemy

Most streaming sites use TCP, so this whole argument about timestamps confused me, to be honest. TCP is by definition ordered; there is no need for a timestamp. So it can never happen that an old packet is delivered after a newer one; it would be dropped.

And yes, the security bit made no sense: you can never reliably predict the route, the speed, or whether a packet even arrives at an endpoint, not even in most TCP/IP LAN setups. As soon as you don't have a collision-free network architecture, you end up with varying transport times and even drops, even though they are small in variation and rarer. WAN is a completely different monster, especially if you look at the European backbone connections. I have packets from Vienna to the UK taking the most convoluted, inconsistent routes all the time.

[Attached image: map of European backbone network routes]
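
You can see that variability for yourself. A quick sketch that times repeated TCP handshakes to one host (no raw-socket privileges needed, unlike ICMP ping; the host and port are placeholders) and reports the spread:

```python
import socket, statistics, time

def tcp_connect_rtt(host, port=443, samples=10):
    """Rough RTT via TCP handshake time; the spread shows route/queueing variance."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake completed; close immediately
        rtts.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)
    return rtts

rtts = tcp_connect_rtt("example.com")
print(f"min {min(rtts):.1f} ms, max {max(rtts):.1f} ms, "
      f"stdev {statistics.stdev(rtts):.1f} ms")
```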

 

And finally, the argument for using timestamps to resolve data-update conflicts is highly overrated; in fact, I'd say in larger, more complex service architectures with multiple systems it's impractical. Sure, as a last step after trying to resolve a conflict based on domain rules, you could override in some cases, but you can never really go back. As soon as you have an enterprise message bus with more services in the domain, things get messy exponentially quickly with rollbacks. Hence most use a first-come, first-served approach and simply refuse out-of-date data processing requests.

So I really only see a solid use case in data centers, to optimize internal data transfers, and perhaps in some applications that rely heavily on high-precision synchronized time. For the vast majority of end users, not so much.


6 minutes ago, MaxiTB said:

@Nystemy

Most streaming sites use TCP, so this whole argument about timestamps confused me, to be honest. TCP is by definition ordered; there is no need for a timestamp. So it can never happen that an old packet is delivered after a newer one; it would be dropped.

And yes, the security bit made no sense: you can never reliably predict the route, the speed, or whether a packet even arrives at an endpoint, not even in most TCP/IP LAN setups. As soon as you don't have a collision-free network architecture, you end up with varying transport times and even drops, even though they are small in variation and rarer. WAN is a completely different monster, especially if you look at the European backbone connections. I have packets from Vienna to the UK taking the most convoluted, inconsistent routes all the time.

 

And finally, the argument for using timestamps to resolve data-update conflicts is highly overrated; in fact, I'd say in larger, more complex service architectures with multiple systems it's impractical. Sure, as a last step after trying to resolve a conflict based on domain rules, you could override in some cases, but you can never really go back. As soon as you have an enterprise message bus with more services in the domain, things get messy exponentially quickly with rollbacks. Hence most use a first-come, first-served approach and simply refuse out-of-date data processing requests.

It wouldn't surprise me if streaming could use UDP with some video codec that tolerates the occasional dropped data block, since then one wouldn't get the latency spike associated with dropped TCP packets. One could technically also over-send, as in transmitting the content twice; if one packet gets dropped it isn't a major issue, since the other copy is still there, but this obviously uses twice the bandwidth... One could also look into fancier error-correction techniques, but that is its own can of worms...
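
The over-sending idea as a sketch (the sequence-number framing is mine, not from any real streaming protocol): transmit every datagram twice and have the receiver drop whichever copy arrives second:

```python
import socket, struct

def send_redundant(sock, addr, seq, payload: bytes):
    # Prefix a sequence number and transmit the same datagram twice;
    # losing either copy costs nothing, losing both costs one block.
    datagram = struct.pack("!I", seq) + payload
    sock.sendto(datagram, addr)
    sock.sendto(datagram, addr)

class Deduplicator:
    def __init__(self):
        self.seen = set()
    def accept(self, datagram):
        seq = struct.unpack("!I", datagram[:4])[0]
        if seq in self.seen:
            return None          # second copy: drop silently
        self.seen.add(seq)
        return datagram[4:]

# Loopback demo
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_redundant(tx, rx.getsockname(), seq=1, payload=b"video block 1")

dedup = Deduplicator()
for _ in range(2):
    data, _ = rx.recvfrom(2048)
    out = dedup.accept(data)
    print("delivered" if out else "duplicate dropped", out)
```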

But live streaming has the big downside that it is live, i.e. there isn't much room for buffering. People get annoyed over 1-2 seconds of latency, and that isn't much to work with if one drops a few TCP packets in a row, since a TCP packet timeout is fairly huge...

In regards to database timestamping and synchronization, there are many, many ways to skin that cat, and equally many ways of not directly caring about it. Whether a page is technically missing the 45th comment someone made as one started reading isn't majorly important in a lot of cases. In other applications one may need much better coherency between systems, and then things do get trickier. And sometimes the simplest solution is to just ask the owner of that database segment for the data every single time, though this has its own downsides.

 

More accurate timestamps are nice in some applications, but far from all. And sometimes accuracy isn't what matters: resolution can in itself be the thing one actually needs for the task. If the clock is off by 5 seconds it might not matter, so long as one knows that packet A arrived 34 ns before packet B on another port of the device. (This is a simplified example; 5 seconds is abhorrently loose by today's standards even when exact time doesn't matter.) One can think of it as a race: whether the competitor arrived 2.341 seconds after the prior one, or at 2021-06-03 14:52:32.473 GMT, doesn't technically matter for the race itself. One is accurate in absolute time, while the other is only relative. In short, knowing the requirements of an application is an important part of designing a system.
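
That resolution-versus-accuracy split maps directly onto the two clocks most platforms expose, as this small Python illustration shows:

```python
import time

# Wall clock: absolute time, can jump when NTP steps it or a user adjusts it.
wall_ns = time.time_ns()

# Monotonic clock: no relation to calendar time, but it never jumps backwards,
# so it is what you use to order and space events ("A came 34 ns before B").
a = time.monotonic_ns()
b = time.monotonic_ns()

print(f"wall clock: {wall_ns} ns since the epoch (absolute, steppable)")
print(f"monotonic gap between two reads: {b - a} ns (relative, race-timing style)")
```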

 

I myself, though, explore the world of more short-term data coherency: CPUs leasing cache lines to each other in a multi-socket system... and there the stakes are fairly interesting. I also find the pros and cons of individual core clock multipliers fairly interesting: from the standpoint of IPC (inter-process communication), a unified multiplier has some advantages if one wants to shave a few clock cycles off the interconnect latency between cores, a niche thing to say the least.


36 minutes ago, AlexTheGreatish said:

 

Like Eigenvektor said, it doesn't eliminate delays between computers. If the computers have in-sync clocks, though, the server can verify when things occurred. This would make it more difficult to spoof packets, but something like an aimbot (especially the new AI ones) would have to be dealt with in more traditional ways.

There's still the major issue that most server applications are designed not to trust client data, for the sake of security. User-generated timestamps are generally thrown out and replaced with the time they are received by the server. Many games do have lag compensation, but that's calculated server-side from sampled network latency, not per event.

 

The main benefit of high-precision timing comes from preventing write conflicts in multi-master transactional databases. But better solutions have existed for a long time. Sharding, for example, lets you designate one instance as the primary for a certain block of data, so it receives all write requests matching that data. Revision control lets applications prioritize updates based on various factors (time included, but not exclusively). Sometimes it's perfectly valid to reject an update because of the conflict and let the client specify which version is intended.
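
Sharding boils down to a deterministic owner lookup, something like this sketch (hash-mod is the simplest possible scheme; real systems tend to use consistent hashing so shards can be added without remapping everything):

```python
import hashlib

SHARDS = ["db-primary-0", "db-primary-1", "db-primary-2"]  # hypothetical instances

def owner_of(key: str) -> str:
    """Every write for a key goes to exactly one primary, so two writers to the
    same key always meet at the same instance and the conflict is resolved
    locally, with no cross-node clock comparison needed."""
    digest = hashlib.sha256(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

for user in ("alice", "bob", "carol"):
    print(user, "->", owner_of(user))
```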

 

48 minutes ago, Nystemy said:

Security is a hard one. Unless one owns the explicit route, the latency of a given link can easily vary by a good few hundred µs even over relatively short distances, and that is plenty of time for some types of man-in-the-middle attacks. (Though good key exchange and proper encryption make these attacks infeasible to start with, especially if one sends the keys via a secure carrier, like putting them on a thumb drive and transporting it in person between the sites that need secure communication. And yes, this is a thing that some companies, organizations and governments do, though hard drives store more keys. But lose sight or control of the drive and the keys are void, so you send new ones until you succeed without ever letting the drive out of your hands.)

Also, receive power is already a very good indicator of optical tapping. If I've been receiving -12.2 dBm for weeks and the level suddenly changes to -9 dBm, that's a good indication of a security event.
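
A toy version of that check, with thresholds invented for illustration: track a baseline receive power and flag a sustained move in either direction, since a drop can be a tap splitter stealing light and a rise can be an unexpected amplifier or reroute:

```python
def power_alarm(baseline_dbm, reading_dbm, threshold_db=1.0):
    """Flag a deviation in either direction from the calibrated baseline."""
    delta = reading_dbm - baseline_dbm
    if abs(delta) > threshold_db:
        return f"ALARM: power moved {delta:+.1f} dB from baseline"
    return "ok"

baseline = -12.2
for reading in (-12.1, -12.3, -9.0, -15.4):
    print(f"{reading:+.1f} dBm: {power_alarm(baseline, reading)}")
```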


2 minutes ago, nic_ said:

Also, receive power is already a very good indicator of optical tapping. If I've been receiving -12.2 dBm for weeks and the level suddenly changes to -9 dBm, that's a good indication of a security event.

Yes, if one's signal power goes from -12.2 dBm up to -9 dBm, then someone just installed a nice amplifier, or the cable route just got redirected via a shorter path; a bit odd if one didn't ask for it.

And yes, signal tapping is also a concern, though port mirroring in any of the equipment in between can make this type of attack viable even without a change in signal power. On the internet one generally owns very, very little of the infrastructure one uses; even if one rents a wavelength, one tends to have little actual control.


12:44 (timestamp)
"...and since again the whole project is open source anyone can implement it and change it however they want."

Open source means that the source code is readable, but by itself that says nothing about the rights granted to other users.
Are you allowed to use the code yourself? Are you allowed to modify it, or to distribute the original and your modifications?

This is where licenses come in.
The license the project ships with DOES permit all of those things (it's the MIT license), so that's fine, but it's still a bit oddly worded.

P.S. I hope they switch to one of the GPL licenses though o.o



54 minutes ago, Nystemy said:

It wouldn't surprise me if streaming could use UDP with some video codec that tolerates the occasional dropped data block, since then one wouldn't get the latency spike associated with dropped TCP packets. One could technically also over-send, as in transmitting the content twice; if one packet gets dropped it isn't a major issue, since the other copy is still there, but this obviously uses twice the bandwidth... One could also look into fancier error-correction techniques, but that is its own can of worms...

But live streaming has the big downside that it is live, i.e. there isn't much room for buffering. People get annoyed over 1-2 seconds of latency, and that isn't much to work with if one drops a few TCP packets in a row, since a TCP packet timeout is fairly huge...

I was curious and checked a few streaming platforms. UDP is such a pain (size, ordering, reliability) that it's only topped by IPX, and I know that a lot of games actually use TCP for exactly that reason, even though it's way less efficient.

  • Twitch: RTMP, which uses TCP (I see no support anywhere for RTMFP, which would be UDP, but I'm no streamer, so maybe the option exists)
  • YouTube: RTMP (TCP), RTMPS (TLS, so again on top of TCP), HLS (TLS), DASH (TLS)

So it seems that at least those two only support TCP/TLS transport protocols.

54 minutes ago, Nystemy said:

More accurate timestamps are nice in some applications, but far from all. And sometimes accuracy isn't what matters: resolution can in itself be the thing one actually needs for the task. If the clock is off by 5 seconds it might not matter, so long as one knows that packet A arrived 34 ns before packet B on another port of the device. (This is a simplified example; 5 seconds is abhorrently loose by today's standards even when exact time doesn't matter.) One can think of it as a race: whether the competitor arrived 2.341 seconds after the prior one, or at 2021-06-03 14:52:32.473 GMT, doesn't technically matter for the race itself. One is accurate in absolute time, while the other is only relative. In short, knowing the requirements of an application is an important part of designing a system.

 

I myself, though, explore the world of more short-term data coherency: CPUs leasing cache lines to each other in a multi-socket system... and there the stakes are fairly interesting. I also find the pros and cons of individual core clock multipliers fairly interesting: from the standpoint of IPC (inter-process communication), a unified multiplier has some advantages if one wants to shave a few clock cycles off the interconnect latency between cores, a niche thing to say the least.

Well, I grew up with the PIT/RTC/TSC/HPET, so yes, it would be nice to have a simple board-wide version of the TSC. I personally find everything that produces a lot of interrupts very wasteful, and it was always a balancing act between precision and efficiency. For gaming, a dedicated high-precision timer would be enough, but the hardware variants are usually not only super expensive but also super power-hungry. And let's be honest: as long as not everyone has one of these, with a minimum guaranteed precision, on their board, it's not something that can be used by end users (indirectly via games, for example).

Because you addressed the race condition: yes, that happens as soon as you try to include history in your considerations. Hence, as an enterprise architect/developer, I always avoided timestamps like the plague. Everyone thinks it's just an easy matter of "the last one wins", but usually a lot happened in between, the data is often not isolated enough, and suddenly you have to rewrite history and synchronize everything everywhere; then you get more conflicts, deadlock scenarios with conflicting states, and your message bus gets more and more bloated, blocking everything new... and in the end you realize it was all for a one-in-a-billion chance of the situation occurring in the first place. Not worth it, IMHO.


Not that I suffer from IBS personally, but using an incurable medical condition as a comedic device is pretty scummy. It just came across as super childish.

 

Would you use something like muscular dystrophy, glaucoma, or whatever else as a comedic device in your videos? No? Then don't use other conditions either, because "LOL IBS has to do with poop stuff so it's OK".



So this is basically a large-scale timecode project.

[Embedded video: Hatsune Miku (初音ミク) Google Chrome commercial]

 

 


 


First post, because my eyebrows were raised the whole time watching this video.

Glad to see the forum thread has the real info.

Watching the video just had me going 'uhhhhh that's not how it works' the whole time.

Having worked on one of the highest-throughput 24/7 systems in the world (one that relies on a database), we don't care much at all about time sync. Maybe if you're a few minutes off, but that's just for security reasons. We do care about network latency (purely from a speed standpoint), but not what the machines themselves think the time is (within reason). *Maybe* strongly consistent writes could be better, but reads? Social networks don't need strongly consistent reads, so any reasonable engineer would use eventually consistent reads, which don't care what time it is.


6 hours ago, ked913 said:

Buddy, I work on FPGAs, SmartNICs, PTP. I know how much a Vivado license for compiling FPGA images costs. I know what the hardware costs.

 

I know what is published on the GitHub. Show me the .v/.vhd/.vhdl files, i.e. the firmware for the Xilinx Artix-7 XC7A100T, which is the core part of the hardware.

 

Open hardware with closed, proprietary firmware is NOT open.

 

If it's open, build it from scratch. Prove it.

He pointed out that he clearly didn't 'skim' from Hacker News, and your response is... unrelated? It doesn't even address what he said, aside from 'buddy'.

