Blogs

 

LG Mobile doesn't seem to be interested in selling phones if it repeats the V30’s launch

If you've been following the news section of the LTT Forums, you would've noticed that the LG G7 was launched. And it does look like a solid, if slightly underwhelming, device. But there's one issue: a lack of information on availability.

No, see, this is a problem that's been obvious over the past few LG devices. The G5 took a long while to reach the market and some of its modules ended up not making it to North America, the V20 suffered from a lack of availability info nearly a month after launch, the G6 suffered from that regional crap, and the V30 suffered from the same shitty launch that plagued the V20.

One thing that I've consistently seen from phone launches by other companies is that they have a timeline of when the device is available for purchase, whether that's an exact date or a rough time frame. LG has NOTHING. Not even a quarter. Nothing at all.

And this is a pattern that continues even today. The G7 has no launch or pricing info, and if LG doesn't come out with it before the month is over, the G7 will already be dead on arrival. I keep saying this, but it really seems that LG is entirely oblivious to its situation. Hype means nothing if the product can't be obtained by consumers. LG needs to understand that or risk becoming even more irrelevant.

UPDATE: More info has come out on the launch, though curiously not from LG. The launch date seems to be June 1st, as confirmed by many carriers, with a South Korean launch happening slightly earlier. That's later than expected and I wish LG had said so earlier, but I guess it's better than nothing?

D13H4RD

 

Which is better for gaming: Windows 7 or Windows 10? A comparison of benefits.

My experience while dual-booting both Windows 7 and Windows 10 is that more games run on Windows 7, and games run better on Windows 7.

1. Windows 10 has DirectX 12, but Windows 7 has Vulkan, which does the same job as DirectX 12. And Vulkan seems to be the favourite of the two among current developers, due to its cross-platform abilities and open-source nature. The only games that you'll miss out on by not having Windows 10 and DirectX 12 are Microsoft-exclusive games (as Microsoft wants to force people onto Windows 10 by withholding their games from other markets).

And there ends the only semi / potential benefit of gaming on Windows 10 that I'm aware of.

2. If you play old games, then Windows 7 natively has better compatibility for them. Some games I haven't even been able to get to run in Windows 10. I can't immediately think of the full list off the top of my head, but the MechWarrior 4 series is one of the recent ones, and there have been others, too. Granted, MechWarrior 4: Mercenaries still takes some tweaking in Windows 7 to get it running, but at least I was able to get it running.

3. Another point is that, though many older games can be played in both Windows 7 and 10 with some tweaking, they might take more tweaking to get working in Windows 10 than in Windows 7. There are some compatibility features that are simply present in Windows 7 which are either turned off by default in Windows 10, or are missing and need to be added manually. This can include absent system DLL files, or something like .NET Framework 3.5 for legacy game support, which needs to be manually enabled in Windows 10.

4. While some people mention that FPS between Windows 7 and Windows 10 should be mostly similar (some games doing better in one OS, and some in the other), there are some definite and even serious caveats to that:

- Due to Windows 10's ironically-titled Game Mode*, which is turned on by default, Windows 10 seems to lose some FPS in a lot of situations unless Game Mode is disabled.

- If you play Ubisoft games, they get a chunk more FPS in Windows 7 than they do in Windows 10. Sure, you can chalk that up to Ubisoft being bad at PC coding, or whatever other explanation you like, but the fact remains that Ubisoft games get a chunk more FPS in Windows 7.

5. Also, Windows 10 interrupts gaming sessions with background downloading, updates, system resets, and the generally diminished stability that Windows 10 has compared to Windows 7.

6. Lastly, there are mounds more community guides, fixes, and tweaks designed for older games running in Windows 7 than there are for older games running in Windows 10.

In my view, without a doubt, Windows 7 is more stable, more reliable, and more compatible with a wider range of games than Windows 10 is. And Windows 7 is at least on par with Windows 10 in the area of FPS. If you just want a rock-solid gaming rig that can do just about everything while not interrupting or obstructing your experience, then I think that Windows 7 is the way to go, as I think that Windows 10 is a compromised experience both in and out of games.

Update: The below video is not representative of newer (maybe 1803 onwards?) versions of Windows 10, which generally show parity in performance between having Game Mode on or off.

* LTT made a video comparing Windows 10 running games in Game Mode to running games with Game Mode turned off:  

Delicieuxz

 

About that Task Manager CPU utilization "being wrong" (and about idling)

Note: I posted this as a status update, but it got long enough that I wanted to preserve it as a blog.

This popped up in my news feed: Netflix's senior software architect says Windows' CPU utilization meter is wrong. He has a good point in that the meter doesn't distinguish between time a thread spends doing useful work and time it spends waiting on something, like data from RAM. He also points out that there's a performance gap between RAM and the CPU, but that's been a known problem for the past 30+ years.

In any case, while I like what he presents, I don't think Task Manager's CPU utilization is wrong, just misleading. All Task Manager's CPU utilization graph is measuring is the percentage of the sampling period (usually 1 second) that a logical processor spent running something other than the System Idle Process. Nothing more.

The other thing is that Windows can't tell whether a thread is doing useful work or not unless it somehow observes what the thread is doing. The problem there is that observing requires CPU time, so Windows has to interrupt a thread (not necessarily the one it's observing) to do it. And how often do you observe what a thread is doing? Adding this feature just to get a more accurate representation of CPU utilization would likely decrease general performance due to the overhead.

If anything, I think it should be the app's responsibility to go "hey OS, I can't do anything else at the moment, so I'm going to sleep" when it actually can't do any more useful work. But a lot of developers like to think their app is the most important app ever and that any time they get on the CPU is precious, so they'll use up all of the time slice they can get.

Background: About the System Idle Process

Just so people are informed about the System Idle Process, see https://en.wikipedia.org/wiki/System_Idle_Process. Now you might go "but why bother with an idle process at all?" There are a few reasons described in https://embeddedgurus.com/stack-overflow/2013/04/idling-along/. A few of these are:

- "Petting the watchdog": a watchdog is a hardware timer that resets the system if it overflows, which is there for reliability in case the system hangs. The idle task periodically resets the watchdog timer.
- Power saving: if the CPU is running the idle process, you know the CPU is doing nothing. Depending on how aggressive you want the power saving to be, you can have it sleep the CPU as soon as it enters the idle task or only after other conditions have been met.
- Debug/logging tasks (not mentioned in the article, but in a comment): if your tasks are time-sensitive, you may not want them to dump stuff for logging or debugging, since the time those operations take may not be predictable. So you shove that work into the idle task.
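To make the idle-time bookkeeping above concrete, here's a minimal C# sketch of my own (not from the original post): it samples the accumulated idle, kernel, and user times via the Win32 GetSystemTimes call, waits a second, samples again, and reports the share of the interval that wasn't spent in the idle loop. The class and helper names are made up for the example.

using System;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;
using System.Threading;

class CpuUtilizationSketch
{
    // Accumulated idle, kernel, and user time across all logical processors.
    // Per the Win32 documentation, the kernel time reported here includes idle time.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetSystemTimes(out FILETIME idle, out FILETIME kernel, out FILETIME user);

    static ulong Ticks(FILETIME ft) =>
        ((ulong)(uint)ft.dwHighDateTime << 32) | (uint)ft.dwLowDateTime;

    static void Main()
    {
        GetSystemTimes(out var idleA, out var kernelA, out var userA);
        Thread.Sleep(1000); // roughly Task Manager's default sampling period
        GetSystemTimes(out var idleB, out var kernelB, out var userB);

        ulong idle  = Ticks(idleB) - Ticks(idleA);
        ulong total = (Ticks(kernelB) - Ticks(kernelA)) + (Ticks(userB) - Ticks(userA));
        ulong busy  = total - idle; // anything that wasn't the idle loop counts as "busy"

        Console.WriteLine($"CPU utilization over the last second: {100.0 * busy / total:F1}%");
    }
}

Note that this measures exactly what the post describes: time not spent idling, which says nothing about whether the busy time was useful work or a thread spinning while it waits.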

Mira Yurizaki

 

The Software Enigma Machine blog Pt. 4: The GUI

Part 4 in the making of the Software Enigma Machine. This last part deals with the graphical interface.

The Outline
A recap on the outline.
- Part 1: What is the Enigma Machine? Why did I choose to implement the Enigma machine? Before Programming Begins: Understanding the theory of the Enigma Machine; Finding a programming environment
- Part 2: Programming the features: Rotors; Rotor housing
- Part 3: Plug Board
- Part 4: GUI
If you'd like to look at the source code, it's uploaded on GitHub.

Programming the GUI
With Visual Studio, designing the GUI was pretty easy. Originally it looked like this:

The rotor positions were implemented as numeric spinners so they could act as both an input and an output. One issue I ran into: at first I wanted the rotor position control to update the rotors internally whenever its value changed. This worked fine, but when I wanted to change their positions, I wasn't sure whether I had actually modified them or not (the "Set Rotors" button originally didn't exist). So to fix this, I use the value change event handler only to highlight that the value differs from the rotor's current position, not to update the rotor positions themselves. The user has to press the "Set Rotors" button to set the rotors and clear the highlighting. That ensures the user knows the rotors on the GUI report the correct position.

The "Seed" spinner sets the randomization. If it changes, a new set of rotors is created, since the rotors are tied to the seed. This helps increase the entropy (or randomness) of the system. If I did my math right, in theory there should be 637,876,516,893,491,200 possible combinations between the rotor settings (456,976 possible combinations), their wiring (650 permutations), and the random seed (2^32).

This version of the GUI is the "simple" version, in that you feed something into the Input textbox and the app spits out the "scrambled" message in the Output textbox. To "decode" a message, the scrambled output is fed in as the input and the rotors are set back to their original position.

I knew this wouldn't really be a good Enigma Machine app if it didn't have the keyboard, lights, and plug board, so that was next. The final design ended up having a tabbed interface for switching between the simple and the full versions:

The top section is the lamps for the letters, the middle is the keyboard buttons, and the bottom is the plug board. The plug board also has three buttons: one for shuffling the wiring (which produces a different random result every time without affecting the RNG seed), one for resetting the wiring to the first randomized position, and one for clearing the board so the letters map to themselves.

The biggest problem I had with making this GUI was that, while I'm sure there's a way to programmatically add elements, I still placed them all piece by piece (well, with some copy and paste for good measure). And then I had to rename them all with useful-sounding names. But here came the worst part: how do I handle all of the necessary event handlers? I needed a handler for every keyboard key press and one for when any of the plug board letter mappings changes.

In Visual Studio, if you double-click on a GUI element, it creates a default event handler for you for the most common action on that element. So if you do this to a button, it will create an event handler for clicking on the button. Sure, I could do this for every button, and I could do this for every one of the plug board drop-down menus. But here's the problem: now I have 56 handlers to edit. And 26 of each have basically the same code, just using a different parameter.
This is a classic case of repeating myself, which is against the DRY principle (Don't Repeat Yourself). The biggest reason why this is a problem is that if I need to change what an event handler does, I have to go back and change it for all of them. That's error-prone and gets old fast.

There was also a related situation with how I would light up a lamp. I could have something like this:

string outputLetter = machine.ScrambleText(Input);
switch (outputLetter)
{
    case "A": ALampLabel.BackColor = Color.Yellow; break;
    case "B": BLampLabel.BackColor = Color.Yellow; break;
    ...
    case "Z": ZLampLabel.BackColor = Color.Yellow; break;
}

Even though I'm manipulating a different object in each case, this is still an example of violating the DRY principle. What if I wanted the lamp to show a different color for a scrambled letter? Now I have to change 26 of them.

At this point I realized a C# List doesn't care what data type it stores, so long as every element is the same type; it can hold a collection of GUI controls just as easily as numbers. A programmatic solution can be done by having collections of the Label, Button, and ComboBox types:

- Create a new List of the appropriate class type.
- Use the List.Add method to add the objects. The caveat is that they have to be added in the correct order. This is where smart naming of the objects comes into play. Since the letter mapping I feed the rotor housing and plug board is the English alphabet, I start with whatever represents A, then B, etc.
- At the end, if the elements need to be initialized to something, I iterate over the List and add what I need.

(There's a small sketch of this wiring at the end of this post.)

For the buttons and drop-down menus, they needed an event handler to handle a click and a value change, respectively. Fortunately, the way they handle the event is the same. For example, key presses just need to do this:

keyLetter = plugBoard.GetRewiredLetter(keyLetter);
encodeKeyPress(keyLetter);

But where does keyLetter come from? I found out that every GUI element object has its own name as a property. So even though in code the object for, say, the keyboard key A is called AKeyboardButton, if I invoke AKeyboardButton.Name, it gives me "AKeyboardButton". And since I named the elements consistently, the schema is such that I can pluck the first letter out and get the letter the key represents. This changes the code to:

string keyLetter = keyButton.Name.Substring(0, 1);
keyLetter = plugBoard.GetRewiredLetter(keyLetter);
encodeKeyPress(keyLetter);

Oh, but where did keyButton come from? That was taken from a parameter passed into the event handler. Event handlers have a parameter called sender of the generic object type. Casting sender to the appropriate type (in this case, Button) lets the code treat the generic object as the Button it really is. Now I can use sender as if it were the button I pressed. So the code is now:

Button keyButton = (Button)sender;
string keyLetter = keyButton.Name.Substring(0, 1);
keyLetter = plugBoard.GetRewiredLetter(keyLetter);
encodeKeyPress(keyLetter);

Using this technique probably saved not only a ton of lines of code, but a ton of headache from needing to go back and change things whenever I need to update how an event handler works. And I've already updated the event handlers a few times.
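Here is a minimal Windows Forms sketch of my own showing the list-plus-shared-handler idea. The control and method naming (AKeyboardButton, plugBoard, encodeKeyPress) follows the post's scheme, but this is not the project's actual code: the real project places its controls in the designer, so here they're created in a loop purely to keep the sketch self-contained.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

class EnigmaFormSketch : Form
{
    private readonly List<Button> keyboardButtons = new List<Button>();
    private readonly List<Label> lampLabels = new List<Label>();
    private const string Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    public EnigmaFormSketch()
    {
        // What matters here is the alphabetical order and the single shared handler.
        for (int i = 0; i < Alphabet.Length; i++)
        {
            var lamp = new Label { Name = Alphabet[i] + "LampLabel", Text = Alphabet[i].ToString(), AutoSize = true, Left = 25 * i, Top = 10 };
            lampLabels.Add(lamp);   // index 0 is A, index 1 is B, and so on
            Controls.Add(lamp);

            var key = new Button { Name = Alphabet[i] + "KeyboardButton", Text = Alphabet[i].ToString(), Width = 25, Left = 25 * i, Top = 40 };
            key.Click += KeyboardButton_Click;   // one handler for all 26 buttons
            keyboardButtons.Add(key);
            Controls.Add(key);
        }
    }

    private void KeyboardButton_Click(object sender, EventArgs e)
    {
        // The sender's Name encodes which letter was pressed ("AKeyboardButton" -> "A").
        Button keyButton = (Button)sender;
        string keyLetter = keyButton.Name.Substring(0, 1);

        // Placeholder for the real path: plug board -> rotors -> reflector -> back out.
        string lampLetter = keyLetter;

        // Because the lamps were added in alphabetical order, a letter's position in
        // the alphabet is also its index in the list - no 26-way switch needed.
        foreach (Label lamp in lampLabels) lamp.BackColor = SystemColors.Control;
        lampLabels[Alphabet.IndexOf(lampLetter)].BackColor = Color.Yellow;
    }

    [STAThread]
    static void Main() => Application.Run(new EnigmaFormSketch());
}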

Mira Yurizaki

 

The Software Enigma Machine blog Pt. 3: The Plug board

Part 3 in the making of the Software Enigma Machine. This time, we're talking about the plug board.

The Outline
A recap on the outline.
- Part 1: What is the Enigma Machine? Why did I choose to implement the Enigma machine? Before Programming Begins: Understanding the theory of the Enigma Machine; Finding a programming environment
- Part 2: Programming the features: Rotors; Rotor housing
- Part 3: Plug Board
- Part 4: GUI
If you'd like to look at the source code, it's uploaded on GitHub.

The Plug Board
At first I thought I could reuse the rotor code, but there was a snag: the user can change the mapping at will. And not only that, but the mapping needs to be reflective. That is, if inputting A results in outputting G, then inputting G results in outputting A. So the first data structure I thought of that could store this mapping is a dictionary, where the keys and values are string types. The dictionary is initialized to take in the list of letters or characters and just have them map to themselves.

public PlugBoard(int Seed, List<string> CharList)
{
    mapping = new Dictionary<string, string>();
    rng = new Random(Seed);
    foreach (string entry in CharList)
        mapping.Add(entry, entry);
}

The RNG seed is needed for the shuffle function, which I'll talk about later.

At first I thought the remapping would be easy:

public void ChangeWiring(string Input, string NewOutput)
{
    mapping[Input] = NewOutput;
    mapping[NewOutput] = Input;
}

But this presents a problem. Say for instance we have this mapping:
A => G
B => S
G => A
S => B

I want to map A to B. But taking that code as-is would actually result in this mapping:

A => B
B => A
G => A
S => B

Oops. I didn't account for the fact that A and B still had something connected to them. So after stewing on this for a while, maybe hoping for some clever swap trick, I decided on this: don't assume the user wanted to remap the other side of Input and NewOutput. That is, those letters should be "disconnected" first. In this case, G and S should be disconnected and mapped back to themselves, then we can map A and B together. So the code becomes this:

public void ChangeWiring(string Input, string NewOutput)
{
    string oldOutput = mapping[Input];
    mapping[oldOutput] = oldOutput;
    oldOutput = mapping[NewOutput];
    mapping[oldOutput] = oldOutput;
    mapping[Input] = NewOutput;
    mapping[NewOutput] = Input;
}

Another feature I wanted is to have the app shuffle the wiring, since:
- The user may not want to manually input what combinations they want on the plug board.
- It allows the app to automatically rewire the board when encoding a text string, rather than having the user key in all of the text.

My first instinct was to randomly generate two numbers as the indexes to rewire. Except... this is a dictionary where the key is a string, so I can't use an integer. And since I can't assume what the list of keys looks like, I can't use that integer to generate a character either. However, you can iterate through the keys using a foreach loop. So I combined the two ideas, so to speak:
- Generate a random number between 0 and the number of entries in the dictionary.
- Use a foreach loop to iterate through the letters. A counter counts up for each iteration. When the counter equals the random number, use the letter picked by that iteration and break out of the foreach loop.
- Do this twice, and the first and second letters to be wired together are picked.

It looks hokey, but it works:

public void ShuffleWiring()
{
    int entries = mapping.Count;
    Dictionary<string, string>.KeyCollection keys = mapping.Keys;
    for (int i = 0; i < entries; i++)
    {
        int randEntry = rng.Next(entries + 1);
        int counter = 0;
        string firstLetter = "";
        string secondLetter = "";
        foreach (string randomKey in keys)
        {
            firstLetter = randomKey;
            counter++;
            if (counter == randEntry)
                break;
        }
        randEntry = (rng.Next() % entries);
        counter = 0;
        foreach (string randomKey in keys)
        {
            secondLetter = randomKey;
            counter++;
            if (counter == randEntry)
                break;
        }
        ChangeWiring(firstLetter, secondLetter);
    }
}

The only other plug board behavior needed is getting the mapped letter. However, since the valid inputs of the board are arbitrary, this method makes sure the input is a key in the dictionary. If it is, it returns the remapped letter. Otherwise it returns the input as-is.

public string GetRewiredLetter(string Input)
{
    Input = Input.ToUpper();
    if (mapping.ContainsKey(Input))
        return mapping[Input];
    return Input;
}

> On to Part 4
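For completeness, here's a sketch of my own showing how those methods can hang together and behave at runtime. The mapping and rng field declarations are an assumption on my part (the post never shows them), and ShuffleWiring is left out for brevity; the other method bodies are straight from the post.

using System;
using System.Collections.Generic;
using System.Linq;

public class PlugBoard
{
    // Assumed field declarations; rng is kept for ShuffleWiring, which is omitted here.
    private readonly Dictionary<string, string> mapping;
    private readonly Random rng;

    public PlugBoard(int Seed, List<string> CharList)
    {
        mapping = new Dictionary<string, string>();
        rng = new Random(Seed);
        foreach (string entry in CharList)
            mapping.Add(entry, entry);
    }

    public void ChangeWiring(string Input, string NewOutput)
    {
        // Disconnect whatever the two letters were previously wired to,
        // then wire them to each other (reflectively).
        string oldOutput = mapping[Input];
        mapping[oldOutput] = oldOutput;
        oldOutput = mapping[NewOutput];
        mapping[oldOutput] = oldOutput;
        mapping[Input] = NewOutput;
        mapping[NewOutput] = Input;
    }

    public string GetRewiredLetter(string Input)
    {
        Input = Input.ToUpper();
        return mapping.ContainsKey(Input) ? mapping[Input] : Input;
    }
}

class PlugBoardDemo
{
    static void Main()
    {
        var letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".Select(c => c.ToString()).ToList();
        var board = new PlugBoard(Seed: 42, CharList: letters);

        board.ChangeWiring("A", "G");
        board.ChangeWiring("B", "S");
        board.ChangeWiring("A", "B");   // the tricky case from the post

        Console.WriteLine(board.GetRewiredLetter("A")); // B
        Console.WriteLine(board.GetRewiredLetter("B")); // A
        Console.WriteLine(board.GetRewiredLetter("G")); // G - disconnected, maps to itself
        Console.WriteLine(board.GetRewiredLetter("S")); // S - likewise
        Console.WriteLine(board.GetRewiredLetter("7")); // 7 - not on the board, returned as-is
    }
}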

Mira Yurizaki

 

Expanded details regarding Craig Murray alleging Facebook censorship

Here is the expanded-details OP text for the thread: Facebook is stealthily blocking / hiding posts and post-shares featuring verified information inconvenient to US / UK propaganda

Blocked By Facebook and the Vulnerability of New Media

The person this information comes from, Craig Murray, is a former UK ambassador to Uzbekistan who is probably best known for blowing the whistle on the US, UK, and Uzbekistan partnership in torturing people in 2005, and also for delivering the leaked DNC emails to WikiLeaks during the 2016 US federal election. Recently, Craig Murray exposed a series of lies the UK government told about a chemical attack that occurred in Salisbury, UK, which resulted in the UK government backtracking on many of its former claims and denying having made some of them, despite its video interviews and social media posts (which the UK government started deleting before that too was called out) proving otherwise.

For more detailed coverage of Craig Murray's recent activity, which has likely led to Facebook censoring his content, see my LTT blog post on the subject.

This information comes from Craig Murray, who has been in the news recently for blasting apart many of the lies the UK government was pushing about the Salisbury chemical attack. Craig Murray used to work in the UK's Foreign & Commonwealth Office, which communicates with the UK's Porton Down chemical weapons facility, the facility that analyzed samples of the Salisbury chemical agent and reported its findings to the UK government.

The UK government had tried to propagandize the public against Russia by claiming that its Porton Down facility had verified that the Salisbury chemical agent came from Russia. But because Craig Murray still has contacts within the FCO, and because he knew first-hand from his time as a UK diplomat the kind of manipulation and deception that goes on behind the scenes with government narratives meant to bias public opinion towards or against an objective, he reached out to his FCO contacts about the Salisbury "Novichok" chemical agent and heard that what the UK government was telling the public was a lie - the Porton Down facility had been completely unable to identify the source of the chemical used in the Salisbury attack.

It has also since been verified by many sources that almost any country is capable of making "Novichok", a chemical whose recipe has been publicly available since the mid-1990s (using the Look Inside feature, it's on page 449), which has been researched by many EU countries since 1999, and which is known to have been produced by Iran in 2016. The UK and US have both made it, and the US even patented weaponized "Novichok" in 2015.

Many of these details were brought to light by Craig Murray, who was ambassador to Uzbekistan, where Novichok was developed and tested. Craig Murray had also visited the Uzbekistan facility during its dismantling, which was done by the USA in 1999, with the US taking responsibility for the facility's remaining stockpiles of chemical agents, to dispose of or otherwise handle them.

As a result of Craig's reporting the truth, the UK government was cornered into admitting it had been lying to everybody when it said that any analysis had confirmed Russia to be responsible.
The UK government was then caught lying about its earlier lies about the Salisbury agent, and was also caught deleting a Twitter post in which it had asserted that Porton Down had verified the Salisbury agent to have come from Russia.

https://www.rt.com/uk/423075-porton-down-skripal-proof/
https://www.rt.com/uk/423162-russia-poison-government-twitter/
https://www.rt.com/uk/424478-skripal-opcw-origin-poison/

Craig Murray also reported on the internal negotiations between the UK government and Porton Down, where the UK government had coerced Porton Down into signing off on the use of the phrase 'of a type developed by Russia' when describing the Salisbury attack agent - a phrase designed to manipulate the public into assuming that the Salisbury agent came from Russia, despite the only semi-accurate meaning of 'of a type developed by Russia' being that it could refer to the fact that the USSR originally developed the "Novichok" class of chemicals.

The UK government thought that so long as 'of a type developed by Russia' had a sliver of truth to it, that would make it permissible to use to convince the public of a wholly different understanding: that it implied the Salisbury agent had some kind of association with Russia. Of course, the "Novichok" family of chemicals wasn't developed by Russia either, but by the USSR - so the 'of a type developed by Russia' phrase the UK government and Porton Down agreed on was a lie no matter which way it's looked at.

Craig Murray has now reported what he thinks is plausible Western responsibility for the poisoning of the Skripals:

Probable Western Responsibility for Skripal Poisoning
     

Delicieuxz

 

The Software Enigma Machine blog Pt. 2: The Rotors

Part 2 in the making of the Software Enigma Machine.

The Outline
A recap on the outline.
- Part 1: What is the Enigma Machine? Why did I choose to implement the Enigma machine? Before Programming Begins: Understanding the theory of the Enigma Machine; Finding a programming environment
- Part 2: Programming the features: Rotors; Rotor housing
- Part 3: Plug Board
- Part 4: GUI
If you'd like to look at the source code, it's uploaded on GitHub.

Programming the Features
This is where the programming magic begins, and how I went about coding each component.

The Rotors
As a recap, the physical properties of the rotors are that they have pins (in this case 26, for the letters of the alphabet), that a pin on one side maps to a pin on the other, and that the rotors need to be able to "advance." One thing: I didn't want the rotors to be fixed to scrambling letters. So the pins are referred to by their number, not by any letter or other character. And to help describe the "input" or "output" of the rotors, since technically both sides could be an input or an output, I refer to the "left side" or "right side" of the rotor. The right-most rotor is always rotor 1, and is the first and last rotor the signal travels through.

So this makes the pin mapping easy: it's a List of integers. A List in C# is accessed like an array, so the indexes of the List are the pins on the "right side", while the values of those elements are the pins on the "left side." So to get a right-side-to-left-side mapping:

return mapping[Pin];

However, there's a problem: how do you get the left-side-to-right-side mapping? There could be another List that contains the opposite mapping, but Lists have a method, IndexOf, that finds the first index holding a given value. While this could be a problem if multiple entries mapped to the same pin, the mapping is always 1:1 and unique, so there's no danger of using this method and having it return something that isn't actually where a pin is mapped. So getting the left-to-right mapping is:

return mapping.IndexOf(Pin);

That takes care of the basic aspect of the rotors. But they still have to "advance", or have an offset, such that the same letter gets a different pin each time it's selected. Also, each rotor has to have a different offset, since they don't advance at the same time. At first I tried doing some math with the pin and adding an offset, but nothing lined up. And to make matters worse, I started with four rotors and I wasn't even sure if the rotors were working, so this confused the heck out of me when it was wrong and I couldn't figure out why. So instead I opted for the rotors to create a temporary list that copies entries from the original but at an offset.

private List<int> getOffsetList()
{
    List<int> tempList = new List<int>();
    for (int i = 0; i < mapping.Count; i++)
    {
        int position = (i + pinOffset) % maxPins;
        tempList.Add(mapping[position]);
    }
    return tempList;
}

This worked, so I continued from there. However, I wasn't quite satisfied with this solution. While my initial check-in of the code has this, I went back and changed how the mapping with the offset is obtained. Since this method works, I used it as a reference point to develop the new one, which works from the input pin and the current offset.
The result is:

public int GetPin_RtoL(int Pin)
{
    Pin = Pin + pinOffset;
    Pin = Pin % maxPins;
    return mapping[Pin];
}

public int GetPin_LtoR(int Pin)
{
    Pin = mapping.IndexOf(Pin) + (maxPins - pinOffset);
    Pin = Pin % maxPins;
    return Pin;
}

So to explain how this works: let's look at this rotor. The input is pin A, which is connected to rotor pin L-1. The output doesn't matter, but going through the loop, the rotor advances and becomes this:

Pin A is now connected to rotor pin R-2, which will output pin L-4. But from the perspective of the rotor, it really looks like this:

So to account for this, the rotor offset is added to the input pin. Since pin A would've connected to rotor pin R-1, adding the offset (1 position in this case) and doing a modulo operation makes pin A map to rotor pin R-2.

Now, what about the left-to-right mapping? To start, let's say the rotor has advanced three positions. The input is coming into pin L-1, which maps to pin R-3. But from the rotor's perspective it's:

The problem is that the IndexOf method will return pin R-3. If taken at face value, the app will think R-3 is the real output, which in the non-offset view maps to pin C (confused yet?). So we have to take the output pin value and modify it so the app believes the correct ABCD pin was outputted (in this case pin D). To do that, take the reverse mapping as normal, add how many pins there are on the rotor, then subtract the offset. Then take that value and perform a modulo by the number of pins to get the correct mapping.

EDIT: Realizing how confusing that explanation is, here's a diagram to visualize how to get the answer:

With the main logic out of the way, all that's left is getting the current position (or offset) of the rotor and a way to set it. Those are straightforward and easy.

The Rotor Housing
The rotor housing, for lack of a better name, contains all of the rotors and handles the interface between the rest of the system and the rotors themselves. These interfaces are:
- The key press scrambler
- What position the rotors are in
- Setting the position of the rotors

The constructor for this class can set an arbitrary number of rotors, how many pins they have, the random seed to use (this is to control the RNG), and a List that maps which "key" is tied to which pin. The mapping is used to translate a string character to a number, since the rotor class works with numbers. The number the rotor spits out is used to figure out which character to use as the encoded text.

There are two private methods: one to run the key input through the rotors and another to advance the rotors. In either case, since the rotors are in a List and a List can be accessed like an array, a first-cut approach would be to use for-loops to loop through each rotor. So the code would've been something like this:

string output = "";
for (int i = 0; i < InputText.Length; i++)
{
    string letter = InputText[i].ToString();
    int inputPin = charMapping.IndexOf(letter);
    if (inputPin > -1)
    {
        int outputPin = rotors[0].GetPin_RtoL(inputPin);
        for (int rotor = 1; rotor < rotors.Count - 1; rotor++)
            outputPin = rotors[rotor].GetPin_RtoL(outputPin);
        /* Reflector rotor */
        outputPin = rotors[rotors.Count - 1].GetPin_RtoL(outputPin);
        for (int rotor = rotors.Count - 2; rotor >= 0; rotor--)
            outputPin = rotors[rotor].GetPin_LtoR(outputPin);
        output += charMapping[outputPin];
    }
    else
    {
        output += letter;
    }
}

Seems simple enough, but I didn't want to take this approach, if only because I didn't like the multiple for-loop usage.
So instead I decided upon a recursive method that looks like:

private int getRotorOutput(int InputPin, int RotorNum)
{
    int outputPin = 0;
    if (RotorNum < rotors.Count - 1)
    {
        outputPin = getRotorOutput(rotors[RotorNum].GetPin_RtoL(InputPin), (RotorNum + 1));
        outputPin = rotors[RotorNum].GetPin_LtoR(outputPin);
    }
    else
    {
        outputPin = rotors[RotorNum].GetPin_LtoR(InputPin);
    }
    return outputPin;
}

The method takes in an input pin and which rotor to start on. If the rotor isn't the reflector (noted by being the "last rotor"), it calls the function again, with the input being the output of that rotor's right-to-left mapping. When the reflector is hit, it doesn't call the method again, so it just returns its output. As each recursive call returns with the pin coming back from the next rotor, the current rotor applies its left-to-right mapping and returns that.

Admittedly, the for-loop method is easier to understand, but I wanted to use a recursive function on the idea that the rotor count is arbitrary, and having multiple loops didn't sound appealing. A similar thought process went into the rotor advancing method. To advance each rotor properly (the right-most one advances every time, the next one advances when the right-most one makes a full cycle, the next one advances when the previous one makes a full cycle, etc.), a straightforward approach would be to do something like:

bool advanced = false;
rotors[0].AdvanceRotor();
if (rotors[0].GetPosition() == 0)
    advanced = true;
for (int i = 1; i < rotors.Count; i++)
{
    if (advanced == true)
    {
        rotors[i].AdvanceRotor();
        advanced = (rotors[i].GetPosition() == 0);
    }
    else
    {
        break;
    }
}

The recursive method is:

private void advanceRotor(int RotorNum)
{
    if (RotorNum == rotors.Count - 1)
        return;
    rotors[RotorNum].AdvanceRotor();
    if (rotors[RotorNum].GetPosition() == 0 && RotorNum < (rotors.Count - 1))
    {
        RotorNum++;
        advanceRotor(RotorNum);
    }
}

Again, while the for-loop method is easier to read, it also feels a little more convoluted. Part of the problem is that, aside from the right-most one, the other rotors only advance when the previous one has made a full revolution. So the for-loop method has to check whether a rotor made a revolution and, if it did, mark a flag so it can advance the next one. The recursive method doesn't need the flag.

There is a flaw with the recursive method, like most other recursive methods: if there are too many rotors, the stack could blow up. But given the small number of rotors likely used for this code, the chances of that happening are limited, unless you really want a 1000-rotor Enigma Machine. Even then, I don't think the stack would blow up unless you use a truly stupid number of rotors.

> On to part 3
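As a companion to the methods above, here's a sketch of my own of a complete Rotor class. The field names (mapping, pinOffset, maxPins) follow the post, but the constructor and the Fisher-Yates shuffle used to generate the seeded wiring are my own guess at how the randomization could work; the real project may do it differently.

using System;
using System.Collections.Generic;

public class Rotor
{
    private readonly List<int> mapping;
    private readonly int maxPins;
    private int pinOffset;

    public Rotor(int pinCount, Random rng)
    {
        maxPins = pinCount;
        mapping = new List<int>();
        for (int i = 0; i < pinCount; i++)
            mapping.Add(i);

        // Shuffle so each right-side pin wires to a random, unique left-side pin.
        for (int i = pinCount - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            (mapping[i], mapping[j]) = (mapping[j], mapping[i]);
        }
    }

    public int GetPin_RtoL(int Pin)
    {
        Pin = (Pin + pinOffset) % maxPins;
        return mapping[Pin];
    }

    public int GetPin_LtoR(int Pin)
    {
        Pin = mapping.IndexOf(Pin) + (maxPins - pinOffset);
        return Pin % maxPins;
    }

    public void AdvanceRotor() => pinOffset = (pinOffset + 1) % maxPins;

    public int GetPosition() => pinOffset;
}

class RotorDemo
{
    static void Main()
    {
        var rotor = new Rotor(26, new Random(1234));

        // Because the wiring is 1:1, the two directions are inverses of each other.
        int left = rotor.GetPin_RtoL(0);
        Console.WriteLine(rotor.GetPin_LtoR(left)); // prints 0

        // After advancing, the same input pin takes a different path.
        rotor.AdvanceRotor();
        Console.WriteLine(rotor.GetPin_RtoL(0));
    }
}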

Mira Yurizaki

 

The Software Enigma Machine blog Pt. 1

Enough people seemed interested in a status I posted some time ago that I thought, hey, it might be a good idea to do a write-up. It'll be short enough that it'll actually end!

The Outline
This section just provides an outline of the series:
- Part 1: What is the Enigma Machine? Why did I choose to implement the Enigma machine? Before Programming Begins: Understanding the theory of the Enigma Machine; Finding a programming environment
- Part 2: Programming the features: Rotors; Rotor housing
- Part 3: Plug Board
- Part 4: GUI
If you'd like to look at the source code, it's uploaded on GitHub.

A bit of background: What is the Enigma Machine?
During World War II, everyone had their own way of encrypting and decrypting messages. The Germans used a modified commercial product for this called the Enigma Machine. It has a key for every letter (in my case, 26 for the English alphabet). Each key was electrically wired up so that when it was pressed, an electrical connection was made through it, then through a few rotors, where it would come out at another key and light up the bulb corresponding to that key. So if you pressed A, it would get scrambled into something like D. Since it was all electrically connected, you could also do something like swap letters so that the input or the output got swapped. From Wikipedia, here's a basic diagram of how it works (pressing A results in a D, though it would've resulted in an S if not for the plug board connection in item 8):

Though it wouldn't do much good if pressing A always resulted in a D. To solve this, every time a key is pressed, a rotor advances - in this case, the right-most one. And when that makes a full revolution, it advances the next one, and so on. This makes it so that pressing the same letter repeatedly keeps producing different ones. A key element in the Enigma machine was the reflector (item 6). This was required due to how the machine was designed, and it introduced a flaw: a letter can never encode to itself.

To decode a message, you take the encoded message, set the rotors to the same position as when the encoding started, then type in the encoded message. The machine will light up the decoded letter for each encoded one.

In short, the Enigma machine is a rotating substitution cipher. Each combination of rotor positions represents a different substitution cipher, and the machine rotates through them automatically. There was also another substitution cipher in the form of the plug board, but that one was manually entered and static.

Why did I choose to implement the Enigma machine?
I was thinking of some simple projects to do on the side. While big, complex projects can be fun, they can also be a pain, and finishing something gives more of a sense of accomplishment than getting 50% of the way through a complicated project after dumping hours into it.

The Enigma Machine came up because I figured it'd be a decent challenge and yet it's simple in theory, so it shouldn't be hard to debug and verify that it works. And World War II is my favorite time period, with the Enigma having interesting stories around it.

Before Programming Begins
Most successful programming projects, or really any project, need a lot of planning and research before the work can really begin. Poorly researched aspects can cause hiccups down the road that may not make themselves apparent until a lot of work has already been done.
Understanding the Theory of the Enigma Machine
There isn't much point in trying to program something if you don't understand how the system works in general. So step one is to understand how the Enigma Machine works. The most interesting part is the rotors, which aren't really that complex: there are input pins and output pins, and each input pin is wired to an arbitrarily picked output pin. So the overall requirements for this system to work are:

- The pin mapping of the rotors should be randomly generated. However, this should not be uncontrolled random generation; the user needs to input the RNG seed. Otherwise, the program is only useful for that instance of the program: each new instance would have a different randomly generated set of rotors with no way of setting it to what another instance used.
- For each rotor, an "input" must map to an "output" such that if you give the rotor an "input" value, it returns an "output" value, but if you give it the "output" value, it returns the "input" value. That is: if I give the rotor 1 and it returns 3, then if I give the rotor 3, it returns 1.
- Each time a rotor is used, depending on its position, it needs to "advance". That is, the next time I use the rotor, the mapping needs to be offset for the same input. For example: if the letter "A" maps to pin 1 on the rotor, the next time I feed the letter "A" into the rotor, it goes to pin 2 instead.

The plug board is a similar idea, in that it's also a substitution cipher that works the same way as the rotors; namely, it needs the second requirement. However, it behaves differently as far as the system is concerned, so it needs a different set of logic:

- The plug board does not automatically advance, i.e. the mapping doesn't change automatically.
- The mapping can change based on the user's needs. The rotors are more or less permanently wired; the plug board, as the name implies, has plugs the user can connect wires into and disconnect them from.
- The user-changeable mapping presents a problem: say we have a mapping of A -> B, which means B -> A, and another mapping of C -> D, which means D -> C. What if I want to wire A -> C? That question is answered in the section where I talk about the plug board design.

Then there's the path of how a key press turns into a lit lamp:

1. A key press goes to the plug board to be remapped. By default, each letter maps to itself.
2. The output of the plug board goes into the input of the first rotor.
3. The output of the first rotor goes into the input of the second, pin for pin (i.e., if rotor 1 outputs pin 2, it goes into pin 2 of rotor 2).
4. Repeat for N rotors.
5. The output of the last rotor goes into the reflector, which spits out a value to go back into the Nth rotor's "output" side; this spits out something on its "input" side.
6. The "input" of the rotor goes into the "output" pin of the previous rotor; repeat until the "input" of rotor 1 is given.
7. The returned "input" goes into the plug board to be remapped.
8. The output of the plug board then determines which lamp lights up.

(There's a small stand-alone sketch of this path at the end of this post.)

The key presses and lamps are merely inputs and outputs; they don't affect the theory of operation of the Enigma Machine.

Figuring out What Language/Environment to Use
I wanted this to have a GUI, because eventually I would need that keyboard + "light" interface. And because there are components in the Enigma Machine that do independent things, with possibly multiple instances of some of them, I felt a language that supports object-oriented design was best.
So these were the requirements for the language and environment I wanted:
- What has good GUI framework support?
- What supports object-oriented design?
- What language features make things like manipulating entries in data structures or creating multiple instances of an object easier?
- What environment is easy to set up?

Out of what I know:
- Python: Python has a few GUI libraries, but my experience with them is limited, and what experience I do have isn't quite pleasant (to be fair, it was only with Tcl). I'm aware there are frameworks that make this easier, but I didn't want to spend time looking for and testing them.
- C++: Similar issue as with Python: I'm aware GUI libraries exist, my experience with them is limited (at least from a fresh start), and I didn't want to spend too much time learning one.
- C: While the state of each rotor can be separated out and the business logic can just take in the state of the rotors to work with, this would be harder than I'd like. Plus there's the issue of finding a GUI framework, though it's likely any C code I'd write would be called from a C++ GUI anyway.
- JavaScript/HTML: I could, in theory, make the Enigma with this, but I didn't consider it for no reason other than it didn't cross my mind.
- C#: I'm sure there's some way of writing and compiling C# in a bare-bones manner, but I use Visual Studio for C# work. And having used Visual Studio and C# to quickly write tools for work, I decided to go with this. Yes, I'm aware Visual Studio has C++ support, but I don't have much experience with it, at least for developing Windows Forms apps.

There's also another reason why I went with C#: I know it has a lot of data structures that make my life easier, because they come with a bunch of methods that do what I want. And it's not just the language features; setting up a C# development environment is stupid easy. It's just "download and install Visual Studio." If I'd spent a day just setting up my tools, the project would've likely been dead in the water.

I like to think of myself as a lazy developer. Not in the sense that my code is lazily written (most of the time, anyway), but in that I don't want to spend all day doing something that feels like it should take 10 minutes. Tools exist to make my job easier; if a tool doesn't appear to do that, it's a poor tool.
With the planning phase done, the next entry will go over the programming process and how things were built.   > On to part 2
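To make the key-press-to-lamp path listed earlier a bit more concrete, here's a tiny stand-alone sketch of my own (not the project's code): a four-letter alphabet, two fixed rotors, a reflector, and a plug board, with rotor advancing left out. It only mirrors the order of the steps.

using System;
using System.Collections.Generic;

class SignalPathSketch
{
    static readonly Dictionary<char, char> plugBoard = new Dictionary<char, char> { ['A'] = 'B', ['B'] = 'A' };
    static readonly int[][] rotors =
    {
        new[] { 2, 0, 3, 1 },  // rotor 1, right-to-left wiring for a 4-letter alphabet
        new[] { 1, 3, 0, 2 },  // rotor 2
    };
    static readonly int[] reflector = { 1, 0, 3, 2 }; // pairs pins up so the signal can come back

    static char Plug(char c) => plugBoard.TryGetValue(c, out var mapped) ? mapped : c;

    static char Encode(char key)
    {
        int pin = Plug(key) - 'A';                      // 1. plug board on the way in
        foreach (var rotor in rotors)                   // 2-4. right to left through the rotors
            pin = rotor[pin];
        pin = reflector[pin];                           // 5. reflector
        for (int i = rotors.Length - 1; i >= 0; i--)    // 6. back through the rotors, left to right
            pin = Array.IndexOf(rotors[i], pin);
        return Plug((char)('A' + pin));                 // 7-8. plug board again, then the lamp
    }

    static void Main()
    {
        Console.WriteLine(Encode('A')); // the lamp that would light for key A
        Console.WriteLine(Encode('C'));
    }
}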

Mira Yurizaki

 

DMVPN - Basic lab and theory

DMVPN is mentioned in the official CCNA guide and also in the CCNP (specifically Routing and Switching, which is what I'm talking about here), but it isn't really listed as something to configure in the exam topics for CCNP ROUTE. The exam blueprint only says you need to 'Describe' it, but if you've ever attempted a Cisco exam before, you'll know that doesn't mean you won't get a question touching on the configuration side. We are going to look at a simple lab with some theory behind DMVPN, without the encryption. But first, a basic explanation of what DMVPN is:

DMVPN (Dynamic Multipoint VPN) isn't a protocol in itself; it's built from several protocols used together to achieve what DMVPN does. It allows us to create a hub-and-spoke-like topology where spokes can dynamically form VPNs with other remote spokes and with the hub. The protocols that make up DMVPN:

- Multipoint GRE
- NHRP
- A dynamic routing protocol (commonly EIGRP or OSPF)

IPSec is also commonly used, but it isn't actually a requirement (although it is preferred, since running plain GRE over the internet isn't the best idea...). Technically you don't need to run a dynamic routing protocol either and could use static routes, but again, it is very common to see a dynamic routing protocol. Before moving on to a basic introduction to the configuration and the design: DMVPN can scale very large (thousands of remote sites), it allows spokes with dynamic IP addresses to participate in the design, and the configuration is far more efficient than creating static tunnels for loads of remote sites.

The single hub topology design

This topology will use the internet as the underlay to transport our packets, while we create an 'overlay' using multipoint GRE to carry our site traffic (10.x.x.x) using EIGRP. In DMVPN, we use the terms 'underlay' and 'overlay' a bit like GRE over IPSec, where IPSec is the protocol used to transport GRE (otherwise we'd have no protection). GRE is normally used to carry other kinds of traffic, since IPSec itself can only carry unicast; if you want to take advantage of multicast and other types of traffic, you can encapsulate it in GRE and then send it over the IPSec tunnel as a unicast packet. In our case, we could even just use IPSec without GRE and statically define the neighbors in our routing protocol so our updates and hellos are sent via unicast instead of multicast, but that bypasses the learning and fun we'll see in this post!

Multipoint GRE
Why not use typical point-to-point GRE tunnels? Firstly, it defeats the whole purpose of what DMVPN achieves: letting us manage the design with ease and dynamically form tunnels with remote spokes and with the hub. Think about a static tunnel configuration: we'd need X tunnels configured on the HUB depending on how many spokes are in the design, then a tunnel from each spoke to the HUB, and finally a tunnel from each spoke to every single other spoke if we need spoke-to-spoke communication without traffic traversing the HUB.

Multipoint GRE allows a single tunnel configuration to dynamically form tunnels, without the need for loads of 'interface tunnel x' lines in the configuration. It takes the configuration of the single interface and then uses NHRP to dynamically form tunnels to other routers.

NHRP

Next Hop Resolution Protocol is the protocol in DMVPN that makes it possible for spokes to register their public IP address against their tunnel interface IP address, whether the public-facing interface is static or dynamic. Everyone explains NHRP as being like ARP, but over the internet instead of within a local LAN. The protocol works on a server-client model where clients point to a server to register their address (more specifically their NBMA, aka Non-Broadcast Multi-Access, address). We will look at NHRP in more detail, not only with configuration but also verification commands and more theory, once we actually see some outputs.

Dynamic Routing Protocol
As I've mentioned, a routing protocol isn't strictly a requirement for DMVPN, although as you may know, a dynamic routing protocol makes routing much more scalable when working with a large number of subnets/networks. We will be using EIGRP in this example.

IPSec

There are many design guides and generic guides on the web showing different methods, such as using an IPSec profile directly in IOS, or even having a firewall offload the IPSec tunnels while a router performs the GRE/NHRP side. In our example I won't be using IPSec, since the IPSec configuration is straightforward to lab and very easy to set up using pre-shared keys; it gets more interesting when you introduce a PKI server for certificates and IPSec enrollment instead of keys/shared secrets...

Basic configuration

Here is the basic configuration of all the routers so you can follow along. As a basic check, we can ping each spoke from the HUB:

HUB#ping 1.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/5/6 ms
HUB#ping 2.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/5/6 ms
HUB#ping 3.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/5/6 ms

Let's start with some basic tunnel configuration. What we need to configure is an overlay that will use the 192.168.254.0/24 network for the tunnels to communicate. Let's also go ahead and configure some other important commands on our HUB, which will act as the Next Hop Server (NHS) for NHRP.

HUB Configuration (Phase 1)

interface Tunnel0
 ip address 192.168.254.1 255.255.255.0
 no ip redirects
 ip nhrp map multicast dynamic
 ip nhrp network-id 10
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1

ip nhrp map multicast dynamic - On the hub, this command maps multicast packets to the mappings that are created in the NHRP database.

ip nhrp network-id 10 - This is similar to the tunnel key command, in that we can identify specific NHRP networks, but this must match on all routers; it is required in an NHRP configuration.

tunnel key 1 - The tunnel key command in tunnel configuration mode lets us define which tunnel specific packets belong to. This matters when we have multiple tunnels on the interface, and as a best practice I like to specify it even with a single tunnel configuration.

Spoke Configuration (Phase 1)

interface tunnel 0
 ip address 192.168.254.(x) 255.255.255.0 !Spoke-1 .10, Spoke-2 .20 and Spoke-3 as .30
 no ip redirects
 ip nhrp map 192.168.254.1 20.0.0.1
 ip nhrp network-id 10
 ip nhrp nhs 192.168.254.1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1

Let's capture some packets! If I shut down the tunnel interface on Spoke-1 and turn it back on, this is the exchange that happens with NHRP, which also reflects the configuration we have done. Let's look into the NHRP packet itself and then see what conversation is going on.
We'll look into the interesting stuff without going into too much depth:

Firstly, Spoke-1 sends an NHRP Registration Request (to 20.0.0.1, which is the HUB). You can see this request holds some information which will build the NHRP database we will see shortly. Spoke-1 announces its own NBMA address and its protocol address (in our case the tunnel address 192.168.254.10, with a destination of 192.168.254.1, the tunnel interface on the HUB). These NHRP requests are sent every 1/3rd of the hold timer, which by default is 7200s (found under the 'Client Information Entry'). The client expects a reply and will keep retrying, doubling each time (1, 2, 4, and so on up to 32... that is the theory for the CCNP exam takers!).

Next, we receive a reply from 20.0.0.1 (HUB), which looks like:

If we take a quick look at RFC 2332, it states that Code 0 is indeed a successful registration with the NHS. The next 2 packets were actually a repeated request and another successful reply, which we won't dive into because they look the same as the request and reply above.

With all the spokes configured, this process happens fairly quickly in our lab environment, and we can now see a populated NHRP database, which can be viewed using:

HUB#show dmvpn
Interface: Tunnel0, IPv4 NHRP Details
Type:Hub, NHRP Peers:3,
 # Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 1.0.0.1         192.168.254.10    UP 00:16:59     D
     1 2.0.0.1         192.168.254.20    UP 00:15:08     D
     1 3.0.0.1         192.168.254.30    UP 00:14:54     D

HUB#ping 192.168.254.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/8 ms
HUB#ping 192.168.254.20
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.20, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/8 ms
HUB#ping 192.168.254.30
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.30, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/7 ms

Do you think we would be able to ping Spoke-1 (192.168.254.10) from Spoke-2?

Spoke-2#ping 192.168.254.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/12/25 ms

The answer is yes! Although something happens behind the scenes. How could Spoke-2 possibly know how to get to 192.168.254.10? What happened is that Spoke-2 actually sent an NHRP request to its NHS (192.168.254.1). Because we have mapped the public IP address 20.0.0.1 to reach the HUB/NHS, we can immediately send a request for 192.168.254.10.

You can see above that we sent our NBMA and tunnel addresses, but the destination is 192.168.254.10. We are effectively asking: what is the NBMA address for 192.168.254.10? Now this is the part where NHRP gets interesting; try to see if something looks different below:

To give a quick overview: we send an NHRP request for 192.168.254.10 to 20.0.0.1 (our NHS). When the request hits the NHS, it forwards it to the NBMA address registered in the NHRP database (1.0.0.1). Spoke-1 (1.0.0.1) then replies with its own information (its NBMA address and tunnel address 192.168.254.10).
If we do a traceroute from Spoke-2 after the NHRP table on Spoke-2 has been cleared, the results prove this:

Spoke-2#traceroute 192.168.254.10
 1 192.168.254.1 9 msec
   192.168.254.10 7 msec 6 msec

Spoke-2#show dmvpn
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:2,
 # Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 20.0.0.1        192.168.254.1     UP 00:27:00     S
     1 1.0.0.1         192.168.254.10    UP 00:00:23     D

Spoke-2#traceroute 192.168.254.10
 1 192.168.254.10 8 msec 7 msec *

If the entry is not in our NHRP database, then the first few packets will traverse the HUB until we receive the reply with the NBMA address of Spoke-1. This is the dynamic part of DMVPN already in action: we learn the address to send traffic to if we want to communicate directly with that spoke.

When we start advertising our networks from the spokes, this will change, and then we can start talking about the different phases that change the flow of traffic and how routes are propagated throughout this DMVPN design. We are going to configure EIGRP to set up a relationship with each neighbor and also advertise the loopbacks into EIGRP.

router eigrp 1
 network 10.0.0.0 0.255.255.255
 network 192.168.254.0 0.0.0.255

We could use more granular network statements to choose exactly what participates in EIGRP, but let's keep it simple and sweet. We'll look at the phases in DMVPN, which can change our traffic flow and how we learn routes. Before moving on: we can run into an issue with EIGRP neighbors flapping over the tunnels, so we must include a command in the tunnel configuration on each spoke which maps multicast traffic to the NBMA address of the hub.

interface tunnel 0
 ip nhrp map multicast 20.0.0.1

Confirming EIGRP neighbors on the HUB:

HUB#sh ip eigrp ne
EIGRP-IPv4 Neighbors for AS(1)
H   Address          Interface   Hold  Uptime    SRTT  RTO   Q    Seq
                                 (sec)           (ms)        Cnt  Num
2   192.168.254.30   Tu0         14    00:02:02  12    1506  0    5
1   192.168.254.20   Tu0         13    00:02:07  624   3744  0    5
0   192.168.254.10   Tu0         11    00:02:16  9     1506  0    6

EIGRP issues

If we have a look at the routes that the HUB has dynamically learned via EIGRP:

HUB#sh ip route eigrp
10.0.0.0/8 is variably subnetted, 11 subnets, 2 masks
D 10.10.1.0/24 [90/27008000] via 192.168.254.10, 00:05:46, Tunnel0
D 10.10.2.0/24 [90/27008000] via 192.168.254.20, 00:05:38, Tunnel0
D 10.10.3.0/24 [90/27008000] via 192.168.254.30, 00:05:30, Tunnel0

There is an issue that can occur because of EIGRP's default behaviour. If we take a look at the routing table for Spoke-3:

Spoke-3#show ip route eigrp
10.0.0.0/8 is variably subnetted, 6 subnets, 2 masks
D 10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D 10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D 10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D 10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0

We can see routes behind the HUB (e.g. its loopbacks) that can successfully be reached via the tunnel interface; the issue is with routes from the other spokes. EIGRP's default behaviour is to not advertise a route out of the interface it was received on (e.g. Tunnel0). This is a classic example of split horizon, which is also part of how RIP works.
We can simply solve this with an interface command on the HUB:

interface tunnel 0
 no ip split-horizon eigrp 1

Looking back at the routing table for Spoke-3:

Spoke-3#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.10.1.0/24 [90/28288000] via 192.168.254.1, 00:00:12, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.1, 00:00:12, Tunnel0

DMVPN Phases

The phases are, roughly speaking, steps in how a DMVPN network can behave:

Phase 1) Only hub-and-spoke traffic.
Phase 2) Spokes can dynamically form tunnels with other spokes without needing to go through the HUB (the initial traffic will still go through the HUB while the NHRP request is resolved).
Phase 3) Spokes can dynamically reply to an NHRP request themselves, so spokes can work together without the HUB having to forward the resolution between them.

Phase 1

During Phase 1, our traffic will ALWAYS go through the HUB because, although we have turned off split horizon, the HUB advertises the routes from other spokes via itself. The next hop IP address in the routing table shows the HUB's IP address, as shown below (notice all routes are reachable via 192.168.254.1):

Spoke-1#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.1, 00:40:05, Tunnel0
D        10.10.3.0/24 [90/28288000] via 192.168.254.1, 00:40:05, Tunnel0

With one more command on the HUB, we can let the routes be pushed out without the HUB adding itself as the next hop to reach the network. This also moves the DMVPN into Phase 2, where direct communication between spokes doesn't need to traverse the HUB all the time:

interface Tunnel0
 no ip next-hop-self eigrp 1

Now we will take another look at the routing table:

Spoke-1#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.20, 00:00:21, Tunnel0
D        10.10.3.0/24 [90/28288000] via 192.168.254.30, 00:00:21, Tunnel0

We can now see 10.10.2.0/24 via 192.168.254.20 and 10.10.3.0/24 via 192.168.254.30: with this command, the HUB no longer advertises the spoke routes with itself as the next hop. As for Phase 3, the idea is that a spoke can reply directly to a resolution request; currently the request is sent to the HUB, and the HUB forwards that request towards the destination.

Here is an example of a basic packet capture when Spoke-1 tries to ping 10.10.3.1 (Spoke-3):

You can see the original source (1.0.0.1 - Spoke-1) sends towards 20.0.0.1 (HUB), and then 20.0.0.1 (HUB) sends it on to 3.0.0.1 (Spoke-3). 
To move this into Phase 3, we can simply add two commands on the hub and one command on each spoke:

!HUB
interface tunnel 0
 ip nhrp redirect
 ip nhrp shortcut

!SPOKES
interface tunnel 0
 ip nhrp shortcut

It's 3:34 AM and I need sleep (said this an hour ago...), so I will update this when I get some time tomorrow...

BSpendlove

BSpendlove

 

Destiny 2 - Why? 18/04/2018

So as is visible from the title, this one's on the famous flop of a game, Destiny 2, which was launched on all platforms. So what is there to rant about with Destiny? The base game is actually designed well, the inventory system is clean, the maps are clear and understandable, gameplay in PVE is fine, and once you get used to it (if you're a Destiny 1 player) the Crucible is alright. So what is the problem with the game? Oh right, the end-game content, and content in general.

So to start off, let's talk about the end-game content. Hmm, wait, there isn't much really. You have public events and patrols, the weekly and daily milestones, the Nightfall and the Raid. That sounds like a decent amount of end-game content, but in retrospect it isn't, purely due to a system they implemented called 'Clans'. When you're in an active clan, you have many members going around doing things for the clan and earning the clan reward for all members. So what's the problem with this? Well, the system includes Raid equipment drops and not just the tokens (which I would prefer), so people who don't do the raid still have a chance of getting a raid weapon or gear for minimum effort, and to kick the ones who actually ran the raid in the teeth, the clan engram for the raid will drop it at a higher light level... So like I mentioned, if you are in an active clan you can get these drops weekly and with less effort.

Now what else can I talk about? Well, the rest of the end-game content is stale (prior to the Nightfall update, Nightfalls didn't drop gear specific to said Nightfall), public events just get repetitive, and milestones are usually done within a day (excluding the Raid, depending on your play style). So what could they have done? Well, below I'm gonna give some opinions.

1. Bring back the daily/weekly campaign missions at harder difficulties. This can provide replayability, give the player another thing to do, and possibly offer a rare weapon only available within the daily/weekly campaign event... *cough* Black Spindle *cough*

2. Add an interactable door in the Tower which takes you to the Last City. To explain this in detail, the door would bring you to the Last City, which would have patrols that have you clearing out the remaining Cabal within the city, plus an area like the Court of Oryx on the Dreadnaught where you must fend off waves of Cabal attacking the city from the wall or heading towards the Tower. It could include things like interactable AI which give you quests, or simply a different perspective on the fall to add to the existing lore. And when you have explored for a while, your Vanguard would radio you saying that they have a reward for you when you return, for helping in the repair of the Last City (daily rewards); nothing amazing, maybe Vanguard gear one light level higher than the current piece you have, or a random exotic engram. This would give more end-game, as having a Court of Oryx-type event that you can start leads to more gear to collect and, well, a fun massive event that more than 6 guardians can participate in.

3. Add back randomised weapon rolls (I heard they are possibly being brought back). This added to the end-game in a way, because people would grind to get the 'perfect roll' on a weapon, and this in return would improve the Crucible beyond just seeing Uriel's Gift and all the other generic weapons that are so, so common there.

4. 
Remove the Raid gear engram from the clan post, meaning members will not get raid gear but will be given tokens for the raid vendor (which only unlocks once you've run the raid). This gives people an incentive to run the raid to use said tokens, and will also provide players with more things to do and/or raid helpers. For this, I was thinking you would need to get this drop of tokens for 3 weeks in a row to get one piece of armour or a weapon from the vendor.

These are minor things that could be made possible using recycled aspects of the game. You have the assets for the City already, since it was used in the second mission (technically the second mission), you have the script for the randomised rolls on weapons from Destiny 1, and the same goes for the daily/weekly missions.

Now, to be fair, this wasn't much of a rant but more my opinion on the game, showing what I see as flaws for the hardcore part of the community. 

Alex Colson

Alex Colson

 

Fortnite Rant 16/04/2018

So as the title spells out for you all, this is me going off on a wobbly over Fortnite, the fairly new Battle Royale game from Epic Games. Now I have many things to give off to Epic Games about this game's systems, but first, a brief intro on the game.

Fortnite is a game which has two modes: Save the World and Battle Royale. I'll be ranting on the Battle Royale aspect of the game, which is 100-player island survival where whoever is last standing gets to be lonely on an island full of giant towers and half-destroyed buildings, with no way to escape the storm which surrounds it. My issues with this game predominantly come from two aspects: the building and the guns. Going off on the guns, they remove a big part of skill from the game, as the better the weapon, the higher the damage it outputs per shot. This means if you're in the end zone with a grey weapon, you may as well turn the barrel on yourself, as it ain't gonna be easy. So I don't like that one of the key aspects of winning is getting a higher-coloured weapon instead of having skill.

On top of that, the weapons have this really odd RNG bullet damage system, I assume, where you can hit someone at point blank range and get a measly 8 dmg show up from a shotgun. That is completely absurd, because the guns already have increasing damage based on the colour of said weapon; why add in an RNG bullet which is like, "hey, 8 dmg, you know you're dead now because of that measly 8 dmg". Anyway, other than those areas the gunplay is good; I do enjoy it when engaging players from time to time.

Coming onto the other rant topic: the building. And no, before people say "just get better at building", it isn't that. I don't like the building because it lets players not plan out their movement as much and just run from point A to B as the crow flies. Because you can just run, and once you get shot at or hit, you just build 4 walls and a ramp to bunker down... which to me kills the strategy aspect of the game when solo, because how are you meant to plan against a player doing that if you're low on ammo? You can't waste shots on him, and then he gets the advantage, as he can peek you without being too far out of cover. The only options are to do the same or fuck off away from the dude. So yeah, the building is a good concept for the game, but at the same time it removes some skills that could be used without it. (I know removing it would just make this a cartoon Battle Royale without its own spin.)

Finally, the last thing that gets me so annoyed with the game (and doesn't really affect me) would have to be the constant adding of content to a basic game mode. I know other companies do this for their games, but nowhere near as often as this. Since Fortnite Battle Royale launched it has had a chain gun, crossbow, impulse grenade, boogie bomb, hunting rifle, homing rocket, hand cannon, InstaFort and others if I have missed any, all within a short span. The fact that Fortnite needs to keep bringing out content to keep people enjoying it kind of begs the question: do players play Fortnite for the base game, or for the new gear, skins, cosmetics and loot that you can acquire via money or by finding it in-game during a match? I'm only basing this off what I know; people who still play PUBG only have crates to get clothing, and even then it doesn't dictate the game and why people play it. 
Yet for me it looks like the skins and all are why people play Fortnite, which is fine, if you like spending money on making a nice-looking character that will eventually be abandoned because it is no longer the craze.

So to summarise: why do people play this game? Is it for the actual gameplay, or for the cosmetics and gear you get? What would make me enjoy this game as the shitty player I am? Well, if the bullet damage were static, meaning that depending on range it does a set amount of damage to a given body area, and none of this "hey, 8 damage at close range" BS with the shotguns. That, and just cleaning up the game to give players play styles other than build-and-hide or build-and-run.

So that's the end of my Fortnite rant, fuelled by the rage of getting the damn 8 dmg pump hit markers in yesterday's game... Please feel free to leave your feedback or toxic hate of my view on the game. As I said, it is only my opinion; others can agree or disagree with it. 

Alex Colson

Alex Colson

 

Teacher build.

PCPartPicker part list: https://ca.pcpartpicker.com/list/8vP9GG
Price breakdown by merchant: https://ca.pcpartpicker.com/list/8vP9GG/by_merchant/
CPU: Intel - Core i5-8400 2.8GHz 6-Core Processor  ($229.95 @ Vuugo) 
Memory: G.Skill - Aegis 8GB (1 x 8GB) DDR4-2133 Memory  ($102.99 @ Newegg Canada) 
Storage: ADATA - SU800  128GB M.2-2280 Solid State Drive  ($66.99 @ Newegg Canada) 
Storage: Western Digital - Caviar Blue 1TB 3.5" 7200RPM Internal Hard Drive  ($47.99 @ Newegg Canada) 
Video Card: Gigabyte - GeForce GTX 1060 6GB 6GB D5 6G Video Card  ($439.99 @ Newegg Canada) 
Power Supply: Corsair - CXM 550W 80+ Bronze Certified Semi-Modular ATX Power Supply  ($81.99 @ PC-Canada) 
Other: ASUS TUF H310-Plus Gaming LGA1151 (300 Series) DDR4 HDMI VGA M.2 ATX Motherboard  ($112.65) 
Total: $1082.55
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2018-04-15 00:12 EDT-0400

Being Delirious

Being Delirious

 

Outpost 2 - A game that should be given a second chance

I recently caught the bug for a niche side of the city building genre: colony building. But then I remembered an old standby that I still think should be used as the yardstick for all colony building games. That game is Outpost 2: Divided Destiny.

What is Outpost 2?

Outpost 2 was a game developed by Dynamix and released by Sierra in 1997. It can be summarized as Sim City meets Command and Conquer. Well, loosely speaking.

The premise is that humans have left Earth because an asteroid annihilates it. In their time drifting, they reject planet after planet until they find one that is close enough. In desperation, due to running low on resources, they land and hope to start anew. So there's a colony building aspect.

But humans being humans, they have different ways of solving the problem of trying to survive. One side wants to terraform the planet. The other wants to adapt to it. Things get heated and they point lasers at each other. There's the combat aspect of it.

What makes Outpost 2 great?

The first part has to do with the colony building aspect. Depending on the scenario, you start off with a central command center that is the heart of your colony: if it goes, the colony goes. But from there on, you build, expand, gather resources, make babies, research! Speaking of resources, there are plenty to manage, but most of the time you're not monitoring all of them constantly. The main ones are metals to build stuff, food to feed the colonists, power to run structures, the colonists themselves (broken up into children, workers, and scientists), and the one you'll mostly be keeping an eye on, colony morale. The neat thing is that a lot of the time you don't really have to actively manage them. You get regular reports, both with a voice-over and "pinging" if something needs attention or something happened.

The colony building itself is done by you. You put down the structures, lay out the infrastructure, and generally make the colony. Speaking of which, there's only one piece of infrastructure to worry about: tubes. And perhaps bizarrely, these only serve to connect buildings to the command center. Power is transmitted wirelessly. Food magically appears for colonists. Refined metals just teleport.

Here's a snippet of a colony from a scenario I'm playing:

There's a middle ground between macromanaging and micromanaging. You don't really need to tend to any single building or unit; the only time you do is when you want to use the building's function. But at the same time, you may need to juggle resources. If there aren't enough workers to run a building, you may consider "idling" one until you get another worker. Morale may be the only one you really have to look out for, because there are many ways to influence it, some direct, some indirect.

The other part of the game is the lore. A whole novella was written and included with the game. It's something you have to read through after the mission briefing, but a surprising amount of backstory was available from the get-go. The game also came with an online manual (which was really just a large help file), and even the descriptions of the units and structures get a blurb about what life is like for the people trying to survive. I mean, here's an example for a Vehicle Factory:

And this extends a bit further in game. I mentioned there's research, which was starting to become the next big feature in RTS games of the time with regards to the tech tree. 
But I don't recall an RTS around that time that went into a level of detail such as this:

I mean, most other games probably would've cut it off at the first paragraph. The second part, though, makes the game feel more immersive. Instead of just telling me what the research provides, it's telling me the why. Why should I spend my valuable scientists on this research? And not only that, the level of science fiction used in this game could be closer to hard science fiction than not. Sure, there are some implausible technologies thrown around like "cool fusion" and "boptronics" (a combination of biological, optical, and electrical gadgets), but take this for example: the outcome might be a bit of a stretch (they found a way to use conductive fluid to generate electricity due to the planet's shifting magnetosphere), but this research topic is a real thing: https://en.wikipedia.org/wiki/Magnetohydrodynamics

This sort of thing tickled my imagination way back when I found it. It still sort of does. Either way, the amount of detail that went into this game, for an RTS that didn't deal with historical settings or whatnot, is amazing.

What isn't so great about Outpost 2?

I may love this game, but that doesn't mean it's without its faults.

A large part of it is RNG. This game is full of it. While you can control how much metal you have, how much power you generate, and how much food you can produce, anything involving the colonists is random. You can only influence them. This means that in the beginning, you'll be struggling to break even as you have to squeeze every bit of resources you can get. And what makes things worse, one resource, scientists, does not generate automatically and requires scientists to create (by way of a University... which needs a scientist). So you can doom your colony to failure.

In addition to the colonists, the planet the people settled on is active. It has its natural disasters. While most of the time they take place away from your colony, you still have a chance of, say, a tornado spawning in the middle of your colony and wrecking everything. Damage control will be extensive.

But hey, maybe this RNG is part of the charm. You are trying to survive after all. But if this is too much random chance, it might be best to skip this one.

There's also the announcer. While it's helpful at times to give you periodic resource reports, later in the game it starts to get a bit too chatty. One thing it announces? Every natural disaster. You can also research early disaster warning systems... which the announcer also announces. Of course, you can turn it off and still get reports, but it also means you have to keep an eye on the resources tab more often.

And lastly, though this is more of a preference, the game is slow paced. One complaint I saw in a review of it at the time is that the game is slow. Not that it ran slow, but that it's slower than others. It can actually take a while for units to move a good distance across the map. But this was at a time when RTS games were compared to Command and Conquer and Warcraft.

You did mention a combat system... I did! But part of me feels it was tacked on as an afterthought to make it try to compete with other RTS games at the time. It's not that it's bad, and there is justification for it. But at the same time it feels slightly out of place.

In any case, combat is mostly relegated to robot vehicles (can't have the colonists risking their lives now). They shoot things. There are pros and cons. 
There are also guard posts that shoot things, but they can't move.

But wait, this is a "2". Where's "Outpost"? (Or: why Outpost 2 never really took off)

The original game had a similar idea: Earth is dead, so go find a planet and build a colony. But the problem is that the game was released lacking features that had been hyped, it was unpolished, and, the worst sin that any game can commit, it was unstable. This left a bad taste in people's mouths. So when a sequel came out, not only was it facing the genre-defining games that I mentioned, but people still looked at it as yet another sloppy colony building game that couldn't pass muster.

How do I get a hold of this game?

So far it's being sold on Amazon by third-party sellers for a fairly decent price: https://www.amazon.com/Outpost-2-Divided-Destiny-PC/dp/B0006OFKOQ/

I'd say if you like colony building games, this is one you should at least try. I feel it sets the bar for other colony building games, if not for the game mechanics, then for how much they put into the lore and story. 

Mira Yurizaki

Mira Yurizaki

 

How to Get Microsoft Office Specialist Certification

How to Get Microsoft Office Specialist Certification? Can you imagine a scenario where a single certification can give you the much-desired and much-required career hike?
 
Yes, it can happen. Not in dreams but in reality. A Microsoft Office Specialist Certification is the key to achieving that. The course is highly reputable in IT circles and holds concrete value as well. The certification will instill a comprehensive set of industry-oriented skills and expertise. By doing this certification, you will be well-versed in implementing the integrated modules of Microsoft Office in real-world situations. It is designed to make you proficient in the full features and practicality of Microsoft Office.

Why go for Microsoft Office Specialist certification? Because there is hardly anything that doesn't require Microsoft Office. From a simple report to a research-based presentation, everything is made with the tools of Microsoft Office. MS Word, MS Excel, and MS PowerPoint are three pillars of any business and organization, and if you don't know your way around them, your survival will be very tough. The certification makes you well-versed in these three vital tools. Here is our guide to getting a Microsoft Office Specialist Certification.

Acquire some basic system knowledge beforehand - Before enrolling in a Microsoft Office Specialist certification, some basic computer knowledge is essential. In today's digital world there is hardly a soul who doesn't know how to operate a computer, but that isn't always the case. If you don't know the basics of the operating system, enroll in a basic computer course first.

Proceed further by enrolling in Microsoft Office courses - Once you acquire some basic operating knowledge, it's time to enroll in a professional Microsoft Office course. The course will help you gain a better understanding of the extensive features of Microsoft Office.

Pick the right certification course - To become a certified Microsoft Office Specialist, you should get a relevant certification. Depending on your skills and your requirements, you can choose a particular certification. For instance, professionals who deal with MIS reports and sales reports can go for the MS Excel certification. Similarly, the MS Word certification is the best option for individuals preparing newsletters, press releases, blogs and any other sort of content.

Clear the certification exam - The final step is to clear the certification exam. All the Microsoft Office Specialist certification programs require candidates to pass an exam to acquire the certification. The exam is 90 minutes long and consists of varied questions. If you have attended the classes and practiced thoroughly, then clearing it shouldn't be a tough nut to crack. However, proper guidance is always crucial.

How to get through the exam? Depending on skills and level, the Microsoft Office Specialist Certification is divided into three categories: Specialist, Expert, and Master. Each category has a thorough curriculum and demands the utmost dedication from the candidate's side to clear the Microsoft training. While choosing an institute for this certification, candidates should be extra careful, as many institutes do not provide customer and online instructor support. There are institutes like Koenig Solutions with dedicated teams to mentor you and help enhance your understanding of the course materials.

Who can go for Microsoft Office Specialist Certification? This certification is not industry-specific or restricted. 
Anyone can do this course, as Microsoft Office use is vast and varied.

However, profiles like administrators, project managers, analysts, marketing managers, and HR executives have a particularly strong need for this course. The scope of this course is not limited to professionals; students can also go for it, as today's education world also demands good computer knowledge.

So, whether you are a skilled professional working in an MNC or a student pursuing your studies, get a Microsoft Office certification and stand above your competitors.

Sushant Katoch

Sushant Katoch

AOC G2460PF 120 Hz over HDMI Testing

The AOC G2460PF supports HDMI 1.4. I will now demonstrate it operating at 1920 × 1080 @ 120 Hz over HDMI. These tests are performed with an NVIDIA GeForce GTX 780 Ti, which also only supports HDMI 1.4.

Display Settings Demonstration

These settings show the G2460PF (the EDID identifies itself as the "2460G4"; Windows however does not read the name) connected via HDMI at 1920 × 1080 @ 120 Hz with full RGB color. A custom resolution was necessary to expose the 120 Hz option (CVT-RB timing was used, with a resulting pixel clock of 285 Mpx/s). Without custom resolutions, only options up to 60 Hz were available. Higher formats such as 144 Hz were also attempted, but failed. The monitor's HDMI port appears to support a maximum TMDS clock of approximately 300 MHz.

Timing Parameters and EDID

The EDID on this monitor reports a maximum of 170 Mpx/s, around the same as the maximum limit of SL-DVI or HDMI 1.2 (165 Mpx/s). However, in practice, the monitor's hardware works up to around 300 Mpx/s. Several custom resolutions were attempted. 1920 × 1080 @ 120 Hz worked with both CVT-RB timing (285 Mpx/s) and CTA-861 timing (297 Mpx/s), but anything above this point resulted in a black screen with a floating "Input Not Support" text. I attempted 1920 × 1080 @ 144 Hz at 317 Mpx/s without success, and even 138 Hz with a pixel rate of 304 Mpx/s (shown below) was rejected.

This monitor makes a good demonstration of two important points:

- The maximum limit of an HDMI device can be any arbitrary limit that the manufacturer decides, or that the hardware is capable of. It is not simply "a device can support either HDMI 1.4 speed (340 Mpx/s) or be limited to HDMI 1.2 speed (165 Mpx/s)", or anything like that. The limitations can be anything, and may differ on every individual model.
- The limits listed in the EDID are simply values typed in by the manufacturer. The EDID does not have some method of magically detecting the actual hardware capabilities of the display. The EDID limits therefore do not necessarily represent the capabilities of the actual hardware.

Verification

Of course, it is possible that the monitor is simply skipping frames, or failing to truly operate at 120 Hz in some other way. Some form of verification would be desirable.

Verification By Oscilloscope

This is measured using a Keysight EDUX1002A oscilloscope and a Texas Instruments TSL14S light-to-voltage converter. A pattern of alternating black and white frames was generated by the blurbusters flicker test (https://testufo.com/flicker). Since oscilloscopes are designed for measuring oscillating waveforms, a set of one white frame and one black frame is counted as a single "wave" (indicated by the two vertical orange lines marking the boundary of "one wave"). For this reason, the frequency displayed on the scope is half the actual refresh frequency, and the displayed period is twice the actual refresh period. In this case, 60.00 Hz indicates 60 sets of black-white transitions (2 frames) per second, for a total of 120.00 frames per second. This demonstrates flawless 120 Hz operation.

Verification By High-Speed Camera

This is a high-speed video of the blurbusters frame skipping test (https://testufo.com/frameskipping) shot with a Casio Exilim ZR100 at 1,000 FPS. Each frame of video represents 1 ms of real time. The video is played back at 30 FPS, meaning that every 1 second of video shows 30 ms of time. At 120 Hz, the display refreshes at intervals of 8.333 ms. This means that we should see slightly fewer than 4 refreshes per second of video, which the video does show. This can also be verified more precisely by examining the video frame by frame and counting 8–9 frames between each refresh. We can also observe from this video that the display is operating properly, without any frame skipping.

High-Speed Camera Complete Demonstration

Just for good measure, this video shows the display operating at 1920 × 1080 @ 120 Hz over HDMI with the frame skipping test in a single take at 1,000 FPS.
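For anyone who wants to sanity-check the numbers in the verification sections above, here is the arithmetic as a small Python sketch. The values are the ones quoted in this post; nothing here is measured by the script itself.

# Quick check of the verification math (values taken from the post above).

refresh_hz = 120.0            # refresh rate being verified
scope_reading_hz = 60.00      # scope counts one black+white pair as a single "wave"
print("Frames per second from scope:", scope_reading_hz * 2)            # 120.0

camera_fps = 1000             # each video frame = 1 ms of real time
playback_fps = 30             # each second of playback shows 30 ms of real time
refresh_interval_ms = 1000.0 / refresh_hz
print("Refresh interval (ms):", refresh_interval_ms)                    # ~8.333
print("Refreshes per second of playback:", 30.0 / refresh_interval_ms)  # ~3.6, i.e. slightly fewer than 4
print("Camera frames between refreshes:", refresh_interval_ms)          # 8-9 frames at 1 ms per frame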

Glenwing

Glenwing

ViewSonic XG2401 144 Hz over HDMI Testing

The ViewSonic XG2401 supports HDMI 1.4. I will now demonstrate it operating at 1920 × 1080 @ 144 Hz over HDMI. These tests are performed with an NVIDIA GeForce GTX 780 Ti, which also only supports up to HDMI 1.4.

Display Settings Demonstration

These settings show the XG2401 connected via HDMI on both ends at 1920 × 1080 @ 144 Hz with full RGB color. These settings are available out of the box without requiring any overclocking/custom resolutions. 1080p 144 Hz was in fact selected by default when the monitor was connected over HDMI for the first time; I didn't even need to set it to 144 Hz manually.

Timing Parameters and EDID

For 1920 × 1080 @ 144 Hz, ViewSonic has decided to define a set of custom timing parameters, with an effective resolution of 2026 × 1157 or a pixel rate of 337.0 Mpx/s, just barely within the 340 Mpx/s maximum of HDMI 1.4. Curiously, when connected via DisplayPort, the monitor uses slightly different parameters defined by the standardized CVT-R2 formula, 2000 × 1157 or 333.2 Mpx/s, which would also fall within the 340 Mpx/s limit of HDMI 1.4. However, these timings are not used for the HDMI connections for some reason.
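As a rough sketch of where these pixel-rate figures come from, it is just total pixels × refresh rate, using the totals quoted above. (The 337.0 Mpx/s figure presumably reflects an exact refresh rate slightly below a nominal 144.000 Hz; the point is simply that both sets of timings sit under the 340 Mpx/s cap.)

# Pixel rate = horizontal total x vertical total x refresh rate
def pixel_rate_mpxs(h_total, v_total, refresh_hz):
    return h_total * v_total * refresh_hz / 1e6

# CVT-R2 timing used over DisplayPort: matches the 333.2 Mpx/s quoted above
print(pixel_rate_mpxs(2000, 1157, 144))   # 333.216

# ViewSonic's custom HDMI timing: ~337.5 Mpx/s at exactly 144.000 Hz,
# consistent with the quoted 337.0 Mpx/s and still under HDMI 1.4's 340 Mpx/s limit
print(pixel_rate_mpxs(2026, 1157, 144))   # 337.548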

The EDID reports a maximum pixel clock of 340 Mpx/s, the highest allowed by HDMI 1.4. The 1080p 144 Hz format is defined within the CTA-861 extension block.

The EDID is the same on both of the XG2401's HDMI ports, and 1080p 144 Hz works on both ports.

Verification

Of course, it is possible that the monitor is simply skipping frames, or failing to truly operate at 144 Hz in some other way. Some form of verification would be desirable.

Verification by Oscilloscope

This is measured using a Keysight EDUX1002A oscilloscope and a Texas Instruments TSL14S light-to-voltage converter. A pattern of alternating black and white frames was generated by the blurbusters flicker test (https://testufo.com/flicker). Since oscilloscopes are designed for measuring oscillating waveforms, a set of one white frame and one black frame is counted as a single "wave" (indicated by the two vertical orange lines marking the boundary of "one wave"). For this reason, the frequency displayed on the scope is half the actual refresh frequency, and the displayed period is twice the actual refresh period. In this case, 71.79 Hz indicates 71.79 sets of black-white transitions (2 frames) per second, for a total of 143.58 frames per second.

Verification by High-Speed Camera

This is a high-speed video of the blurbusters frame skipping test (https://testufo.com/frameskipping) shot with a Casio Exilim ZR100 at 1,000 FPS. Each frame of video represents 1 ms of real time. The video is played back at 30 FPS, meaning that every 1 second of video shows 30 ms of time. At 144 Hz, the display refreshes at intervals of 6.9444 ms. This means that we should see slightly more than 4 refreshes per second of video, which the video does show. This can also be verified more precisely by examining the video frame by frame and counting 7 frames between each refresh. We can also observe that the display is operating properly, without any frame skipping.

Glenwing

Glenwing

 

How to Read the Product Description

"My laptop only has an HDMI port. I need to connect to the DisplayPort input on my new 144 Hz monitor. I found this adapter on Amazon, can I use it to connect my laptop's HDMI output port to my monitor's DisplayPort input port?"   To answer this question, we must apply some reading skills:                   No, you can't.

Glenwing

Glenwing

 

Think beyond just knowing computer problem solutions

Fixing computers is always a satisfying thing, especially after chugging away at it for hours or even days. And when you come across the solution, you tuck it away in your memory, notebook, or what have you, so the next time the problem shows up, you can fix it again. But I don't think that's enough to really "master" the computer.

I have a feeling that a lot of people who are beyond beginners accumulate solutions to problems, or at least know how to find them online, and simply spout out the solution. While this is fine for minor problems, more involved issues also tend to get little more than a glance at understanding why something is happening in the first place. I find this detrimental and growth-stunting in some ways.

Since it came up recently: I saw yet another reply in a thread where the OP mentioned cloning drives and the reply was "just reinstall Windows, you'll have problems if you clone." The last time I cared enough to try to talk to these people, I was linked to an article by someone who cloned a drive and shared their nightmare story about how things were broken, but they just reinstalled Windows and everything was fine. It makes me wonder... have these people actually cloned their OSes, or are they just spouting rhetoric that someone else wrote? Because I'm looking at myself going "I've cloned a half dozen times, and none of them had issues." And then there's the fact that cloning is a popular way to mass-deploy a system. It's simply much faster to copy the data over to a new drive than it is to go through the setup process.

If nobody pokes at the solution presented, it breeds misinformation that spreads. I did a somewhat haphazard test of cloning an OS and using it, and my research only led to one thing that might possibly be the culprit, and it's caused by something someone would likely do. But if the answer to cloning is "don't do it, reinstall Windows because cloning causes issues" and the person can't explain why, then I don't think whoever says that understands the nature of the problem or the solution. They don't have to go into specific details like "oh, because xxx driver is dumb and hard-codes where it lives" or "yyy setting affects zzz thing that's dependent on aaa thing", but they should offer something that at least makes sense.

Why is this important? Because establishing a root cause helps when the same problem happens again: if you try the same solution you always have and it doesn't work, you can try a related one.

One example of this is when people report their RAM usage is very high, but when they add up the amount of RAM their apps are taking up, it's much less. Like, Task Manager reports 15/16GB is used, but the apps only use up 3GB. People often go "you might have a virus" (possible) or "something is wrong with your RAM" (also possible). However, when I see this problem, I ask the OP, "Can you check if your non-paged pool is high?" Why? When I first looked into this issue, the one thing that popped up a lot was that the Killer NIC drivers were leaking memory, and this causes non-paged pool usage to increase. And so far, most of the time when someone reports this, they're using a Killer NIC, so a driver update usually solves their problem. The root cause might be "Killer NIC drivers leak memory, causing the non-paged pool to increase." But then I dug into what the non-paged pool is: it's kernel-space memory that can't be paged out. So it's possible that poorly written drivers of any sort can do this. 
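(As an aside, if you'd rather check this programmatically than through Task Manager's Performance tab, here's a rough sketch, assuming Windows and Python 3, that reads the kernel pool counters through the Win32 GetPerformanceInfo call; the field names come from the PERFORMANCE_INFORMATION structure.)

# Rough sketch: print Windows kernel paged/non-paged pool usage via GetPerformanceInfo.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
if ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    # The counters are in pages; multiply by the page size to get bytes
    to_mb = lambda pages: pages * pi.PageSize / (1024 * 1024)
    print(f"Non-paged pool: {to_mb(pi.KernelNonpaged):.0f} MB")
    print(f"Paged pool:     {to_mb(pi.KernelPaged):.0f} MB")

A non-paged pool that keeps growing into the gigabytes is the kind of thing that points at a leaking driver rather than a hungry application.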
So the general solution isn't just "if you have a Killer NIC, update the driver"; it's "update your drivers if you haven't, because one of them may be leaking and the update may have fixed it." I also found out after some random testing that VMs usually use kernel memory space, which may also cause a situation like this.

In another example, I work for a company that does system-critical software. That is, this software needs to be reliable, as failure can mean serious injury or death to those operating the machinery the software controls, or to others who happen to be around it. When a severe problem comes up, we investigate the hell out of it to find the root cause. We do not like solutions that work when we don't know the "why." We still implement and post the solution, but it's not very helpful if we don't know what caused the problem. Because if it happened once, it'll happen again, and the solution may not be a complete solution; it may just lessen the severity of the consequences. And if we know the root cause of that problem, it may start lighting bulbs in our heads that the issue may have led to other problems.

So the next time you come across a complicated problem and solve it, try to figure out why that problem happened in the first place. You never know; it may allow you to fix your computer without consulting Google.

Mira Yurizaki

Mira Yurizaki

 

Version 0.2.7 TRIO

ABOUT
Release TRIO is a pre-release that we have worked on since 01/16/18. We still haven't finished the program, which uses HTML, but we are constantly working on it. When we have free time, we get on our computers and start to work on this. Release version 0.2.8 should be up very soon.

FEATURES
The HTML program, as of the TRIO release, has built-in audio playback straight from the page without opening a new tab, embedded videos straight from our drives (there was some TechQuickie in there, but due to GitHub size limitations, the videos did not show up), PDF file viewing (still working on the embed codes, they keep messing up), and nav bars (team idea).

SUPPORTED DEVICES
So far, we have found that every device supports it, but if you have an Android root server, music links may not open.

LINK
https://github.com/HpTechTips/Removable-HTML5-File-Explorer

HPTECHTIPS

HPTECHTIPS

 

HP Pavilion Elite Makes Me Money

Hey guys,

This is the PC I bought which got me into PCs and flipping them. I got this PC for 110 dollars. It came with a keyboard, mouse, Logitech speakers, and a monitor. This was back when I barely knew that I could take the side cover off a computer. I really liked this computer; it had an i7, and at the time I thought an i7 was the best. I didn't even know there were different versions and generations of the i7. Here is how this HP Pavilion made me money. Before I sold the computer, I gave it a fresh install of Windows 10, because the previous owner didn't delete everything from it and I had to learn how to do that. I ended up selling the computer for 130 dollars, just the tower and nothing else. The reason I sold it was that I picked up a 30 dollar gaming rig which I was told worked. Once I sold this one and got home, I plugged in the new rig and it wouldn't show anything on the monitor; long story short, it ended up just being the graphics card, and I swapped it for a new one. I tried new cables and everything, and none of it worked besides swapping the graphics card. In the end, I got to keep a monitor, keyboard, speakers, a mouse, and 20 dollars. This was a real decent computer and did everything I wanted it to. I would recommend it to anyone trying to just surf the web, watch videos, and do even some light gaming. This PC could handle a lot more if it had an upgraded graphics card and maybe a new power supply. The computer came with an MSRP of $1,099.99. It started off with Windows 7, but somewhere along the line it was upgraded to Windows 10. It was really neat to learn from this computer and see what I would want/need in my next PC. Since this was such a base-model computer, it made me appreciate higher-end rigs. This also goes to show that anyone can really flip PCs and it's not hard to do at all.

HP Pavilion Elite e9270f
CPU: Intel Core i7-860 2.8 GHz (max 3.46 GHz)
RAM: 8GB (4x2GB), max 16GB
HDD: 1TB
GPU: ATI Radeon HD 4650
PSU: 350W
MOTHERBOARD: MSI MS-7613

Hope you enjoy this little PC flip, catch you later!  

PortzJ

PortzJ

 

PC Part Lot

This is a PC part lot that I got for 100 dollars, and I have no idea if anything works... I will find out later and post about it again! When I first pulled up to this house to buy these parts, I was already a little sketched out considering the location, and I was somewhere I was not familiar with. The person selling all these components had found them in the basement of a house that she was currently flipping. When I first got all of this, I was super pumped, considering that all the stuff I buy is second hand and I'm just starting out in the PC world. My first impression was awesome. It looked like it all would work. Once I got home, I realized how gross all the stuff was. The graphics card had so much corrosion on it, I'll include a pic. The Gigabyte motherboard box had what looked like rat poop in it. Some of the DDR2 RAM had this bug in a little sac on it that looked like a little worm. Some of the other RAM just looked like someone put gel on it and touched it with dirty hands. The hard drive ports were a little destroyed. I cleaned it all up and tried to get it to work, but sadly it wouldn't. It was all pretty gross; I definitely washed my hands after this.

Memory (DDR3)
-OCZ 2gb (2)
-G.Skill RipJawsX 2gb (3)
-Corsair Vengeance 8gb (2)
-Bunch of DDR2 Ram

Graphics Card
-Vapor-X Radeon HD 5770

Hard Drive
-Seagate Barracuda 1500 gb

Motherboards
-Gigabyte 78LMT-S2P
-ASUS M5A78L-M

Also got an EVGA 6-pin power cable.

*********************************************************************************
UPDATE
Sadly, not all of this worked. I'll make a list of what did and didn't. I actually got my money back, then just bought the ASUS motherboard for 10 dollars, and she gave me some of the RAM back to see if I could get it to work. The graphics card she gave to me for free. In the end it worked out well.

Memory (DDR3)
-OCZ 2gb (2) (didn't work)
-G.Skill RipJawsX 2gb (3) (didn't work)
-Corsair Vengeance 8gb (2) (didn't work)
-Bunch of DDR2 Ram (I didn't have anything to test it on)

Graphics Card
-Vapor-X Radeon HD 5770 (worked)

Hard Drive
-Seagate Barracuda 1500 gb (didn't work)

Motherboards
-Gigabyte 78LMT-S2P (didn't work)
-ASUS M5A78L-M (I assume it works)

********Pictures of the components after closer inspection*******  

PortzJ

PortzJ

 

BUILD GUIDES 2018-01 Phase 1

$4000: https://pcpartpicker.com/guide/KWdnTW/4000-gaming-royale-build $3500: https://pcpartpicker.com/guide/QXBD4D/3500-gaming-royale-lite $3000: https://pcpartpicker.com/guide/FzxFf7/3000-gaming-x-placeholder $2500: https://pcpartpicker.com/guide/36V323/2500-gaming-x $2000: https://pcpartpicker.com/guide/RQnTwP/2000-gaming-x-lite $1800: https://pcpartpicker.com/guide/93Xscf/1800-gaming-sweet-spot $1600: https://pcpartpicker.com/guide/7zxFf7/1600-sweet-spot $1400: https://pcpartpicker.com/guide/VJXscf/1400-sweet-spot $1200: https://pcpartpicker.com/guide/mYBD4D/1200-sweet-spot-lite $1000: https://pcpartpicker.com/guide/8nqqqs/1000-mid-range

LienusLateTips

LienusLateTips
