
Posts posted by scotartt

  1. I have a Mac mini (6-core i7). I use it with this ridiculous 43" 4K LG, and I love this monitor: I can sit well away from it, and it has a million inputs, so I can easily switch between the mini and the work MacBook.

     

    I could have bumped the mini to the M1, except the I/O on the current M1 is junk. I'll have to wait and see what they do with their higher-tier mini. I use my current one mostly for recording audio. It's really not great for video editing in terms of graphics power, and there's no point investing in external graphics in the Macintosh world either.

     

    I also wondered if they would offer Mac Pro users an upgrade path, e.g. imagine a thing half the size of the AMD Radeon cards they have that's a multi-core M1X card. At that point you'd have your Xeon for Intel workloads and the M1X for Apple silicon native ones.

     

     

      

  2. 20 minutes ago, Dredgy said:

     


    Nope. I’m surprised this isn’t more widely known, especially as storage drives start to get bigger and the differences more noticeable.


    The drive IS 14 terabytes in size.

    Windows doesn’t actually show data size in terabytes, it shows it in tebibytes.

     

    It has nothing to do with formatting; 14 terabytes is equal to 12.73 tebibytes.



     

     

     

    Is this related to the "1 kB = 1000 bytes vs 1 KiB = 1024 bytes" issue? (base 10 vs base 2, I guess)
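    That base-10 vs base-2 split is exactly what's going on: drive makers count in powers of 10, the OS reports in powers of 2. A quick sketch of the conversion (plain Java, illustrative only):

```java
public class TbVsTib {
    public static void main(String[] args) {
        // Drive makers use decimal units: 1 TB = 10^12 bytes.
        long bytes = 14L * 1_000_000_000_000L;
        // Windows reports binary units: 1 TiB = 2^40 bytes.
        double tebibytes = (double) bytes / (1L << 40);
        System.out.printf("14 TB = %.2f TiB%n", tebibytes); // 12.73
    }
}
```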

  3. Make the iPhone/iPad run both iOS and macOS -- and make them automatically switch between the two when you drop them on a wireless docking station connected to a monitor, mouse, and keyboard. Maybe the docking station could have an extra A-series processor in it, and a graphics card for extra desktop points, as well as wirelessly charging the phone battery.

     

    Basically, make the "laptop" redundant. All you carry with you is your storage, a display, and a processor: enough to get portable work done, and then you have the "desktop" experience when you're at a desk.

     

     

  4. We used an older model of these things for ramp staff at our airline. We are in the (nearly complete) process of replacing them with iPads.

     

    Re: car mounts. I could see people like refuellers (in a fuelling truck) using them.

     

    @Euchre -- that's a problem of cheapskate managers who won't buy enough machines or parts to cover spares, more so than of the specs of the device. The same could be said of all the proprietary modules (imagine if your fingerprint reader died and your site worker couldn't log in anymore). This is exactly the sort of thing where your onsite repair techs require either entire spare machines or spare parts. It is in fact one reason we are moving to iPads: they are easy to source.

     

  5. Triggered by the part 1 Mac Pro build video, I'm interested to know why the PC water-cooling community doesn't seem to use AN fittings?

     

    We use them in automotive builds quite a lot (usually more for oil and fuel, because a car's coolant lines are typically about 2"), and they come in pretty anodised colours. Plus, you can get hard 45°, 90°, even 180° bends attached straight to the fitting for those tight fits. I think they go down to at least 3/16" hose sizes too.

     

    Obviously it's probably overkill (no high fuel pressures or 120°C oil temps here), but I don't think 'overkill' has ever been a deterrent in the PC build community, given some builds I've seen! Maybe I should do a build project using only automotive cooling parts? Oil cooled? Lol. Will I need a scavenge pump for the turbo oil return if I don't high-mount it?

     

  6. 18 hours ago, Hi P said:

    So my plan as I write this is (not in a particular order, besides from Java being the first) :

     

    1) The basics of Java, I'm halfway through the course which includes an intro to JUnit, databases with SQLite, and networking

    2) MySQL and MongoDB

    3) Spring framework

    4) RESTful services

    5) Git and Github

     

    That's a pretty solid plan. I'd put git higher, like second.

     

    As for how you learn the software engineering side of things? After you've got the basics of your language out of the way, there are software engineering books (and websites, etc.) and probably a ton of courseware on sites like Brilliant and Skillshare, the ones that seem to be advertised on every second science and tech YouTube channel.

     

  7. Advice for the OP.

    Spoiler

     

    1. If you want to do backend, probably Node JS is your best bet. There are lots of online resources to help you learn it.
    2. Otherwise, I'd say take your pick of Java or C#. You can still write good backends in those languages, as long as you learn the modern idiomatic style for them.
    3. You will need to learn about SQL databases:
      1. MySQL is dominant
      2. But PostgreSQL is superior
      3. However, document DBs like MongoDB are the future; relational databases are tired and old.
    4. Do NOT neglect software engineering practices. Learn these as you learn the languages and their toolsets:
      1. Software engineering practices are the difference between someone who can code, and someone who can program.
      2. Software engineering practices are the main way I evaluate the performance of my developers.
    5. This means git for source control.
      1. Yes, lots of places still use svn or even, god forbid, cvs, but git is your best bet.
      2. Do not be afraid of the git command line. In fact, learn it before you move on to fancy GUI tools.
      3. Get yourself a github and/or gitlab account.
      4. Have a look at what open source projects you'd like to contribute to.
    6. Tests. Tests. Tests. Tests are never optional. Tests are your lifeblood. Tests are your safety line.
      1. Write unit tests in lockstep with the code.
      2. Do not make the mistake of thinking "code first, test later".
    7. It's a lifelong learning exercise. I always ask prospective hires these questions:
      1. What new language did you learn in the last year?
      2. What are its good points?
      3. What are its bad points?
      4. What would you use it for?
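    To make point 6 concrete: "in lockstep" means the check exists the moment the code does. A minimal plain-Java sketch (method name is hypothetical; in a real project this would be a JUnit test, which the course above covers):

```java
public class SlugifyTest {
    // Method under test -- written together with its test, not after.
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    public static void main(String[] args) {
        // Each behaviour gets an assertion as soon as the code exists.
        check(slugify("Hello World").equals("hello-world"));
        check(slugify("  Spaced  Out  ").equals("spaced-out"));
        System.out.println("all tests passed");
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }
}
```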

     

     

  8. On 8/20/2019 at 3:22 AM, Franck said:

     

    Javascript runs only on the client side, but some code I would consider back end. An example would be three.js, which does 3D matrix computation and lots of other more complex 3D things. A lot of it is back end stuff.

    Javascript is all over the back end nowadays. That's what Node JS is.

  9. On 8/31/2019 at 12:55 AM, wasab said:

    I'm actually running my website using Apache server which runs on top of Ubuntu server which runs on top of a hypervisor aka a docker inside the openstack. Programming language choice won't be an issue.

    If you're running inside a Docker container, blow away everything but the kernel and the app-server infra that runs the service, e.g. Alpine Linux and Node JS. Don't bother with Ubuntu and Apache -- it's all a waste of memory and CPU. Run multiple Docker containers (e.g. one for the database, one for Node JS, one to host the front end) and use another container to route the ingress traffic to the right endpoint.

     

    That's the beginning point of a "cloud native" architecture. Forget about monolithic unwieldy multilayer "servers".
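    A minimal sketch of that layout as a Compose file (service names and images are hypothetical, just to show the shape):

```yaml
# Hypothetical three-container split: ingress router, Node JS app, database.
services:
  ingress:
    image: nginx:alpine          # routes incoming traffic to the right service
    ports:
      - "80:80"
  api:
    image: node:alpine           # just the runtime, no full Ubuntu/Apache stack
    command: ["node", "server.js"]
  db:
    image: postgres:alpine
```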

     

  10. Devblox has good advice, and depending on what you're working on (and what language and version), I'd also consider using lambdas:

     

    List<String> strList = getTheListOfStrings();
    strList.forEach(str -> { 
        printTheString(str);
    });
    
    // can also be abbreviated to a method reference
    
    strList.forEach(this::printTheString);
    

     

    Javascript has got the same sort of thing in it.

     

    For brain-melting advanced programmer points, consider using Streams.
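    For example (a small sketch, reusing a list like `strList` above), Streams let you chain filter and map steps before a terminal operation:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        List<String> strList = List.of("alpha", "beta", "gamma");

        // One declarative pipeline: filter, transform, collect.
        List<String> shouty = strList.stream()
                .filter(s -> s.length() > 4)   // keep only the longer strings
                .map(String::toUpperCase)      // transform each element
                .collect(Collectors.toList());

        System.out.println(shouty); // [ALPHA, GAMMA]
    }
}
```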

     

     

  11. On 8/8/2019 at 4:08 AM, Uttamattamakin said:

    Windows has the ability to mold itself, like clay, to whatever computer you put it on. If Windows can boot on a device it will TRY to make itself work. That is what a strong OS would do. 

     

    A strong OS is not so fragile that it will refuse to work unless you buy the hardware SPECIFICALLY FOR IT.   That is not a great thing about an OS.

     

    Gonna necro this thread a bit (three weeks). I think "macOS being Unix at its core" and "what hardware the OS runs on" (which Windows is much better at) are two different things. If Windows, in those days, had been an option for me, I may well have stayed with it. At the time I switched, it wasn't. It was Linux or nothing, until I discovered that the Mac could run all the tooling I required at the time.

     

    Sure, I could get a pretty good laptop experience with Linux nowadays. When I switched to Mac (from a mix of Linux and Windows on commodity PC hardware), it wasn't really possible to get a great out-of-the-box experience with Linux on a laptop. I specifically switched because I got sick of having to faff about with drivers and the like to make my laptops function properly. I was working in a development environment where Windows (even with Cygwin) just couldn't run the necessary tooling; the software was, of course, deployed to Linux instances running in the datacentre (before that job, the software I wrote ran on AIX and IRIX, the IBM and Silicon Graphics versions of Unix, respectively). Switching to Mac meant I got all the tooling I needed, mostly the same as I would use on Linux. If I were to switch now, of course, I could get a good laptop with a good Linux experience ... you know, but too late for that. ;-) Plus I'd still be hosed for all the corporate guff like Office 365 which I have to use as part of my job.

     

    My choices were: 1. be desktop only; 2. eternally struggle configuring Linux to run on a laptop; or 3. get a Mac. Sure, the OS is optimised for particular hardware. But from my 2017 MacBook Pro to the old 2010 iMac my wife still uses to rip Blu-ray and DVD (hey, fair-use purposes only there!), there's a pretty wide range of hardware. Even more if you consider that earlier versions of the OS ran on non-Intel processors, and it looks increasingly likely that they will again in the next couple of years. The Darwin kernel (at the heart of iOS and macOS) already runs on ARM (i.e. the Apple A-series chips).

     

     

  12. 20 hours ago, Uttamattamakin said:

    I'm old.  To me Bash and SSH are just overlays on underlying tech which I used as a kid. 

     

    Bash is a program the emulates the old VT 100 terminal, which was hardware  from the 1970's,  so are pretty much all command prompts.  Many of the keyboard commands you are used to were first implemented in it as it was the first to adhere to the ANSI standard.  https://en.wikipedia.org/wiki/ANSI_escape_code#Platform_support , https://kb.iu.edu/d/acpy  

    The program is called "terminal"  WHY DID YOU THINK THAT WAS? 

    Telenet is the technology, behind the technology of the internet.  It is one of the original components of it which was available to ordinary people.  At one point a terminal program, a vt100 terminal emulator,  was the only way to get on the internet.  Porno took a long time to download in those days... mainly I wanted to see if it could be done. 
     

     

    Dude, I'm way older than you. You were ... 8 or 9 years old when I enrolled in computer science. When I built my first Linux kernel (i.e. compiled it), I got a four-port serial card, put it in my PC, then went dumpster diving up in the local tech park and came back with a couple of VT220 terminals, which I RS-232'd to a couple of the serial ports. Instant multi-user local login.

     

    Oh, do you know why, when you log in to a "terminal", it used to be assigned an identifier like "ttyS0"? TTY means teletype. So ... you know how "vi" has two parts, "vi" and "ex" (ex is the bit you get when you type esc-:)? It's like that because its ancestor, "ed", was meant to run on a teletype. You would "ed" somefile, then type "1p" and the teletype would print line 1. Then you'd edit it, e.g. s/mispeling/misspelling/p, and voila! it would print the corrected line. You could even "cat" the whole file and it would proceed to print, literally, on the teletype. No need for "more" or "less"; those programs didn't make sense on a teletype.

     

    Look at this command-line output in Terminal on a Mac:

    $ w
    21:06  up 3 days, 19:39, 18 users, load averages: 1.38 1.54 1.64
    USER     TTY      FROM              LOGIN@  IDLE WHAT
    xxxxxxx  console  -                Sun01   3days -
    xxxxxxx  s000     -                Sun01    2:34 -bash

    It's still called a "TTY".

     

    I work in aviation; we still have data formats that are meant to be wrapped as "TYPE B" teletype messages. They are 5-bit (ONLY CAPS ALLOWED).

     

    Oh yeah, also, one computer I worked on had old-fashioned "core" memory (Linus shows that off in that Saturn V computer video). Those magnetic cores are why you'll still find a file called "core" written to storage when Linux does a coredump.

     

    Anyway, /bin/sh and its descendants like bash do not emulate the terminal. /bin/sh is a command interpreter, primarily. 

     

    lmao, kids today, get off my lawn, etc.

  13. 22 hours ago, tridy said:

    In Service Fabric, for example, there are several copies/nodes that are covering for each other, so if one goes down, another takes over it. All of this is running on the SF on the local machine. In the cloud different nodes will be on different machines, but that is handled automatically. There is no need to think vms, containers, etc.

     

    Oh right, now I gotcha. Yeah, that's what we do with Kubernetes and Istio.

     

    The rest of what you wrote, I mostly agree with.

  14. 20 hours ago, Uttamattamakin said:

    In that case why have Unix on your personal computer at all?  Surely you can access a Unix Prompt on any one of those servers using old fashioned Telenet and a VT 100 terminal emulation program.   Use X11  to remotely run GUI applications.  Meanwhile, on your local machine you can be playing GTA Online at full speed.

     

    I'm talking about my local computer, my laptop. It has to have a good bash shell. "Telnet" (what, surely you mean ssh) and a "VT100 emulation program"? Lol wut? That's called a bash prompt in Terminal.

     

     

  15. 20 hours ago, tridy said:

    Thanks for your comments, scotartt.

     

    IDEs will depend on what frameworks, platforms, languages, cloud providers, etc. And there are different development groups. Some can show that their environment will run multiple, redundant, self-healing services on a single machine in a serverless environments without any containers "overhead" needed.

     

    I don't understand how a service would be redundant if it runs on a single machine. I don't quite get the point you're making here, I think.

     

    Quote

    From time to time I do look at Linux and I just cannot make it work for me. A command line here and a script there - too much command line for me.

     

    I do not think Git and Windows is a problem (anymore?); it is installed with Visual Studio if you need it. Visual Studio Code can do all the packaging in a similar manner. It is a matter of taste and learning how it works.

     

    I don't know; the first time I realised it was a gigantic pain in the ass was the last time I ever tried to adapt a Windows machine for use as a development environment.

     

    Someone else pointed out, 'what if you worked at a company without Visual Studio?' That's true. Maybe they do a lot of pair programming and need a standardised IDE, so all the developers know the IDE on every machine. Or maybe they develop mostly in a language it doesn't support (Kotlin? Go? Python?).

     

    However, my take on this is slightly different: never be reliant on the IDE. IDEs are nice and all, and provide tremendous productivity to developers (e.g. refactoring), but in my view developers who get stuck in that groove are often bamboozled when confronted with a problem the IDE cannot solve, or one it created in the first place. I've found this especially true of visual source control tools. I never use the one in IntelliJ or PyCharm. I always use `git` on the command line.

     

    I do sometimes use GitKraken to visualise what's going on with the branches, so that I can craft an effective strategy in a particularly tricky rebase or merge situation. But once I've done that, I'm usually running the commands with git in the shell ... and resolving merge conflicts with Atom, at most.

     

    Quote

     

     

    IMO, commands like checking out code, mvn install, etc. -- for Git + VS projects, the same can be done with like 5 clicks and F5. I am still surprised that people prefer typing commands instead of using visuals and UIs, but that's probably just me. There is a feeling that you can and do more when using commands, I think.

     

    Call me old school, but I did computer science when my school required most of its assignments to be written and submitted from your shell account on a SunOS server over dial-up.

     

    I think the logic with the command line runs like this: a GUI feels more intuitive at first and has an easier learning curve, but in the long run the command line is more flexible, more powerful, and, most importantly, composable:

    $ cat infile.txt | acommand --option --doitproperly | anothercommand --works --verbose | finalcommand > outfile.txt

    This is a large part of what I mean about the Unix Philosophy. Write small programs that do one thing well, and use a standard way to get input into and output from those programs.

     

    Macs, because of the shell and their underlying BSD roots, have this, and they have the GUI niceness. Linux has the former and Windows has the latter, but in my experience only Macs have both.

     

     

  16. 19 hours ago, Uttamattamakin said:

    That's where the super OS comes into play.  Install Linux as a dual boot on a reasonably chosen windows comp*.  Then virtualize that partition.   Here's a nice friendly video about it.   This is NOT RECOMMENDED FOR PEOPLE WHO ARE CASUAL USERS.  

     

    None of that is going to work in a corporate development environment. Dual boot, seriously? Nahh.

     

    For say... using CUDA to accelerate heavy duty calculations.

     

    If I want heavy-duty calculations I will spin up as many EC2 instances, or Lambdas, or whatever, as I need for my load.

     

    Building hardware to match your peak requirement and then leaving it idle the other 90% of the time is tremendously old school. Disposable, automated, composable, scalable software containers, spun up as load increases and thrown away as it decreases: that's how you build big scalable systems nowadays. That's how Netflix, Facebook, Google, and Amazon build at their scale.

     

  17. On 7/25/2019 at 1:54 AM, Justin092 said:

    One question I've had, is how many programming languages do you know(use the most/use for work)? 1, 2, 3........20?

     

     

    Learn a new language every year. 

     

    Seriously, the languages I *use* are ... Java, Javascript, Python, and /bin/sh.

     

    But I make sure that at least once a year I've taught myself at least the basics of a new language and written something in it, even if trivial and throwaway. Including the stuff I learned at university, that list expands to: C, Pascal, Smalltalk, Miranda, Basic, Ruby, SQL, Perl, Objective-C, Swift, Kotlin, Go, and Scala. There are probably ones I've forgotten. And that doesn't include things like YAML or TeX.

     

    Haskell's on my list for next year.

     
