straight_stewie

Member
  • Content Count

    2,021
  • Joined

  • Last visited

6 Followers

About straight_stewie

Profile Information

  • Gender
    Male
  • Location
    North Mississippi
  • Interests
    Audio, Programming, Engineering. Just a hobbyist now, unfortunately.

Recent Profile Visitors

4,151 profile views
  1. That's one option. You could also just write a Python script and run it in Blender's console. Usually you would only make an add-on if the "thing" needs a lot of user input, needs to look professional (i.e. if you distribute it with your name on it), or needs to blend seamlessly into a console-free workflow.
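A one-off console script can be just a few lines. A minimal sketch, assuming Blender 2.8+'s bpy API (older versions use obj.select = True instead of select_set):

```python
# Run in Blender's Python console: select every mesh object in the
# current scene and deselect everything else.
import bpy

for obj in bpy.context.scene.objects:
    obj.select_set(obj.type == 'MESH')
```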
  2. That's still not a valid reason. This forum doesn't require users to verify their real identity...
  3. Each frame of an animation can be thought of as a completely unique model. If you can't find a better solution, you could export each individual frame in whatever format your 3D printing tools accept. You can do this manually: select the frame you want to export, export it as a model, and repeat for every frame. For any meaningful animation that would be an absolutely insane amount of work. However, you can automate it with Blender's built-in Python scripting console; you won't even have to write an add-on, unless you want one for future use (a sketch follows below). The Anim operators will help you move between frames, and the Export Scene operators will help you export scenes as .obj files. You can also select individual objects and export only those, if you don't need to export the whole scene every frame (for example, if your "world" and your animated model are in the same scene). This is the launching point for anyone who wants to learn how to automate workflow tasks in Blender: https://docs.blender.org/manual/en/latest/advanced/scripting/introduction.html
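A minimal sketch of that per-frame export, runnable from Blender's Python console. It assumes the legacy bpy.ops.export_scene.obj operator (newer Blender versions expose bpy.ops.wm.obj_export instead), and the filename pattern is made up:

```python
# Export every frame of the current scene as its own .obj file,
# written next to the .blend file ("//" means the blend-file directory).
import bpy

scene = bpy.context.scene
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)  # jump the scene to this frame
    path = bpy.path.abspath(f"//frame_{frame:04d}.obj")
    bpy.ops.export_scene.obj(filepath=path)
```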
  4. I would argue that virtually no one who isn't already turned off of cloudsourced VPNs for the other reasons I mentioned is willing to buy cryptocurrency without going through an exchange, or even knows that you can.
  5. A VPN cannot enforce encryption between you and your target receiver; it can only enforce encryption between you and the VPN server. After exiting the VPN server, all of your data appears as it would without the VPN, except possibly for some changed headers (a toy model follows below). This means cloudsourced VPNs provide no benefit to the end user, except possibly subverting region or IP blocking strategies. Additionally, as this article would suggest, users have no idea what the VPN service is actually doing. That's one of the fundamental truths of computer security: if your data/machine ever leaves your possession, it is no longer your data/machine. There are only two useful use cases for VPNs, and they happen to be the ones they were primarily designed for:
    • The VPN server runs on a machine physically connected to your local network and is used for secure remote access to that network.
    • The VPN server is provided by a third party, making it useful only for subverting region/IP blocking, internet censorship, and transparent DNS proxying. Note, however, that this use case comes with a whole host of caveats about how secure your communications actually are.
    Any other use amounts to security by obfuscation, which is widely regarded as providing no actual security. As a final nail in the coffin for "security by cloudsourced public VPN service": the provider has to store your payment data somehow, and that trail always leads to one endpoint, the actual account the money comes out of. This means you not only have to trust the VPN provider with your data, you have to trust them with your bank account, and therefore with your actual identity.
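A toy model of where that encryption boundary sits. This is purely illustrative, with made-up names and no real cryptography or networking; it only shows which hop carries plaintext:

```python
# Toy packet flow: the client encrypts only as far as the VPN server;
# the VPN decrypts and forwards, so the last leg is effectively the
# same traffic you would have sent without the VPN.
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str
    payload: str
    encrypted: bool

def client_send(data: str, vpn_addr: str, real_dst: str) -> Packet:
    # "Encryption" here is just a marker; a real VPN would use TLS/IPsec/etc.
    return Packet(dst=vpn_addr, payload=f"enc:{real_dst}:{data}", encrypted=True)

def vpn_forward(pkt: Packet) -> Packet:
    # The VPN server decrypts and re-addresses the packet (new headers),
    # then sends it onward in the clear.
    _, real_dst, data = pkt.payload.split(":", 2)
    return Packet(dst=real_dst, payload=data, encrypted=False)

pkt = client_send("GET /", "vpn.example.com", "203.0.113.7")
print(vpn_forward(pkt))  # Packet(dst='203.0.113.7', payload='GET /', encrypted=False)
```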
  6. I'm not sure of one that already exists. I plan to post a thread about it when I get to that stage with my new build, but it'll be a while. You should understand though that this method is very tedious and it will take multiple weeks to arrive at the final overclock. For example, some tests in MemTest86 can take upwards of eight hours per run.
  7. This is my order of operations for every new build:
    • Run the most comprehensive test available in the free version of MemTest86.
    • Install the OS of choice for the build. For a build like that it's bound to be Windows.
    • Install your burn-in test of choice and run it.
    • Remove all bloatware and configure Windows to be useful.
    • Set up any initially required software: drivers, management utilities, GPU overclocking software, comprehensive benchmarking software, Firefox.
    • If I'm going to be overclocking, this is when I start my overclocking loop: change a setting, test for stability, record the change and the full benchmark suite scores, and so on.
    Even if you're not going to overclock your processor, you should still do a loop where you bump up your memory's XMP setting, then check for stability, exiting the loop when you've reached the highest stable XMP setting. I recommend doing this even if you're not overclocking because it's essentially free.
    If I'm overclocking everything, I first run my overclocking loop on the processor, then the GPU, then the memory again, and then back to the processor. If any changes are made to the processor that last time through, I visit the GPU and then the memory again. If any changes are made to the GPU or memory in that final pass, I go back through the whole process again, until no changes are made (see the sketch below).
    The assumption that makes this overclocking technique work is the observation that performance usually starts to decline before things become so unstable that they stop working, and that we can catch this by graphing our benchmark results every time we change an overclock setting. However, these may be local maxima (the GPU may be held back by the CPU, which could in turn be held back by the GPU, and all of it could be held back by the memory or be holding the memory back), hence we loop through everything at least twice. An optimal overclock is reached when all constraints (power consumption, maximum voltage settings, maximum temperatures under load...) are upheld and no changes were made on the last pass through the loop. This technique also lets you drill down into more and more advanced settings as you work through the loop. The point of this technique, although tedious, is that you can get a fairly large and extremely stable overclock without having to give up features like speed stepping, all while minimizing the amount of voltage that has to be added to stay stable, which increases component longevity. This can yield "daily driver" overclocks that are competitive with competition overclocks (well, those that use the same cooling solutions you're using, at least).
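Sketched as code, the procedure is a fixed-point loop: keep making full passes over the components until an entire pass changes nothing. tune_component here is a hypothetical stand-in for the manual adjust/stress-test/benchmark/record work:

```python
# Fixed-point overclocking loop: revisit every component until an
# entire pass makes no changes anywhere.
def tune_component(name: str) -> bool:
    """Manual step: bump a setting, stress test, run the benchmark
    suite, and record the results. Return True only if a setting
    actually changed on this visit."""
    return False  # stub: the real work is done by a human

def overclock_loop(components=("cpu", "gpu", "memory")) -> None:
    changed = True
    while changed:
        changed = False
        for part in components:
            if tune_component(part):
                changed = True  # something changed: do another full pass

overclock_loop()
```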
  8. I absolutely love posts like this, because they're an excellent opportunity to point out that the only valid use for a public VPN is exactly this use case: bypassing geographical restrictions or internet censorship. A cloudsourced public VPN can never provide you with any form of security. This is also a great example of why it's absolutely necessary to log all connections on your network and to review those logs regularly.
  9. I never could figure out why people wanted every single thing to be connected to the internet: That's actually a really bad idea. I mean, now we are connecting one of the most private things ever.
  10. There are two major roadblocks here, one technological and one financial:
    • If the Arm G77 IP were useful for your target demographic, it would already be in use there. When Arm says it's targeted at mobile, they mean phones and tablets, not mobile PCs.
    • If you have access to a billion-dollar loan, you are either the child of a billionaire or you already have a billion dollars of high-performance assets to use as leverage.
    If you've solved the second roadblock already, then I would suggest entering the RISC-V market instead of the desktop graphics market. The desktop graphics market is rife with competition from two massive and well-established companies right now, which means those two firms, Nvidia and AMD, are pumping money into their desktop graphics solutions groups: you will not be able to compete with them on a billion dollars of credit and no positive revenue unless you have already made extremely significant innovations at the silicon level. These are $100-billion-plus companies that can afford to hemorrhage insane amounts of money in the high-performance GPU market to combat their competitors if it becomes necessary. Nvidia and AMD will definitely crush any new startup in the desktop graphics processing space, and if they can't crush it because it has made significant, game-changing innovations, they will just buy it. The RISC-V space, however, is a market that is just getting started. With a billion-dollar commitment you have more than enough money to start a market-leading firm, if you can find high-quality chip manufacturers willing to work with you. You still likely won't be able to manufacture desktop-caliber processors in house with just a billion dollars of credit to work with: advanced semiconductor manufacturing is overwhelmingly expensive, especially when you don't already own the equipment necessary to build the physical chips. But you might be able to design market-leading RISC-V IP. In this business model your primary competition will eventually be ARM, that is, if the market segment actually becomes viable.
  11. I found this to be an accurate description of the game, based on my first playthrough.
  12. A VPN device meant to connect your remote employees to your internal network has no benefit to your business at all if it's cloudsourced; literally all it does is cost you money. Let's think about how a VPN works:
    • You set up a connection between yourself and a remote server.
    • The connection is configured to automatically encrypt all data between you and the remote server.
    • You configure your machine to route all interesting communications to the remote server.
    • You configure the remote server to decrypt your communications and pass the packets along, with new headers (metadata), to the other network it is connected to.
    In your example of a cloudsourced VPN, the "other" network the VPN is connected to is just the internet. So when a remote employee wants to access your local network, here's what happens:
    • The remote employee configures their machine to route the necessary traffic through your cloudsourced VPN.
    • They log in to your network: they send encrypted data, addressed to your network gateway, to the VPN. The VPN decrypts that data and sends it over the internet to your network gateway.
    • Your local network gateway then thinks it's the VPN server it's talking to. It sends decrypted data addressed to your VPN server; your VPN server encrypts the data and sends it to the remote employee.
    This is really bad for you, as it offers your local network no protection. For a VPN to provide your business network any protection at all, the VPN server must be running on a device physically connected to your network. There are some things you can do that might be free, or are otherwise less expensive than you might think:
    • If you have a firewall acting as your network gateway, check whether it can also be configured to act as a VPN server. This is very common with enterprise-class network hardware.
    • If you don't have a firewall acting as your network gateway, consider getting one. They can be built relatively inexpensively out of spare or used hardware, making the only significant expense the network interface cards. According to Netgate, pfSense Community Edition "remains a free and open product available for your personal or business use", as long as you don't turn around and try to sell the pfSense software, that is.
    To put it shortly: if you can build your own gaming PC, you can build a relatively inexpensive device that runs pfSense and acts as a network gateway (for internet access), a firewall (for security), a VPN server (for remote employee security), and a router/switch (for the convenience and value of needing only a single device for a small network). This, in combination with @dalekphalm's more in-depth answer about Mobile Device Management, would make a good start toward securing your local network when it's being used by remote employees.
  13. I'll admit, I have my fair share of gripes with Windows. But I've got some gripes with Linux too (hence why I use FreeBSD every time I think about using Linux). In 25 years of using various versions of Windows, though, I have never experienced the type of behavior you describe, excluding Vista. As far as performance hogging goes, I occasionally experience an interrupt storm from File Explorer, but that's only because I have it configured to periodically index my main working drive so that I get faster file system searches. I guess YMMV, as this tangent would suggest.
  14. First, try this, as @mariushm pointed out:

```vb
' change:
Dim i As Single
' to:
Dim i As Integer
```

    Floating-point performance can be drastically different between different models of processors, even when those processors are in the same generation. Additionally, it's just bad practice to use a floating-point number as a sentry value in a loop. We like to think that floating-point numbers are able to represent every real number, but they most certainly are not. As a result, a number like 100,000 stored in floating point may actually end up as something like 100,000.0002 or 99,999.9998, and you could miss your comparison against the integer 100,000 (see the demonstration below). There are also issues with comparing floating-point types to integer types in the hardware of the processor itself, which could cause you to miss a comparison between a floating-point number and an integer when they should actually be equal. Beyond that, floating-point operations are just damned slow: modern consumer-oriented Intel processors offer only 30-40 billion floating-point operations per second, versus somewhere around 1 trillion non-floating-point operations per second. If i were an integer and you still saw the same performance decrease, I would venture to guess that it's a configuration problem, as single-threaded non-floating-point performance hasn't increased as much over the years as people like to think it has:
    • Are both machines running the same version of VBA?
    • Are both machines running the same version of Excel?
    • Are both machines running in the same performance/power mode?
    • Are both machines using the same version of Windows?
    • Do both machines have enough memory to avoid a bottleneck?
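A quick demonstration of that representation problem (Python here, but VBA's 32-bit Single type is hit even harder):

```python
# Why a float makes a poor loop sentry: adding 0.1 a million times
# does not land exactly on 100,000, so an equality test would be missed.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)              # e.g. 100000.00000133288, not 100000.0
print(total == 100_000)   # False: an "== 100000" exit condition never fires
```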