
About i_am_kingjonny

  1. Anyone able to answer my question please? My research has hit a brick wall. So I'm running two X5450s (released Nov 2007), and the articles are saying all processors made in the last decade are affected. Do these processors make use of the speculative execution thing? And I use my workstation for audio processing rather than gaming; am I likely to see a performance hit? Much appreciate any input.
  2. Thanks mate! I think it's actually closer to 250 W at idle! I'll take away your greater experience though, and come back again in the distant future when I've found some use for them other than NAS. At least in the winter I can save on central heating!
  3. Thanks for the reply, bud. UK based. Storage was my primary aim, so I'm glad I can at least form them into something useful. It's a shame I can't build them into a single powerhouse, but at £200 it's an inexpensive hobby for me to fiddle with. And for the sake of the 5 array card batteries (£40 each), 16 hard disks (£20-£30 each), 60 GB of RAM (probably £50 total), processor upgrades (£65) to the audio workstation, and other bits and bobs, I don't feel like I've made a mistake buying them.

On the subject of bitcoin and getting the machines to do something more worthwhile: as you can imagine, I've discovered my HD 7770 is by itself far better at the task than all 8 CPUs would be (I already owned one of these for audio processing), plus a random 9500 GT I had lying around. So, given the boards have 8 slots each, I'm now looking at what I can shove in them to generate a profit. The dual (redundant) power supplies are 850 W (1000 W peak), so they could probably take a good number of cards each, so long as they draw from the slot. (I had to use a Molex adapter to get the 7770 to work, which was a massive faff.) Right now I'm looking at cheap 7750s on eBay.

But there are these other peripherals (can't recall their designation) designed to work specifically on blocks, and they're all USB? OK, so if I'm feeling rich and it's financially viable I can populate the slots with USB hubs and have a cat-o'-nine-tails of USB leads coming out the back of each machine, but are there any slot-based solutions for bitcoin other than Radeon cards? For the life of me I can't seem to get Google to show me any, if they do exist!
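For context on why the HD 7770 beats all eight CPUs here: bitcoin mining is just brute-forcing double SHA-256 over a block header until the result falls below a target, and a GPU runs thousands of those hashes in parallel where a CPU runs a handful. A toy Python sketch of the hash being searched (dummy header bytes, purely illustrative, and trillions of times easier than the real difficulty):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """One bitcoin-style hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy search: vary a nonce until the hash starts with one zero byte.
# Real mining does exactly this, just against a vastly smaller target.
header = b"dummy-block-header"  # placeholder, not a real block header
nonce = 0
while double_sha256(header + nonce.to_bytes(4, "little"))[0] != 0:
    nonce += 1
print(nonce)  # first nonce whose double hash leads with 0x00
```

Hardware that can try billions of these per second (GPUs, and the USB block-working peripherals mentioned above) wins; a general-purpose CPU core simply can't keep up.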
  4. So, will XenServer work? It does seem (on initial research) to offer clustering capability and then run VMs? Though controlling a machine from the web would be a different story.
  5. Ok, so first off don't panic: I haven't actually got 3 working servers just yet; the other 2 are spare parts which I could turn into working machines for a fairly small investment. (Why are VRMs more expensive than processors?!) Oh, and disclaimer: while I'm a bit brainy and a swift learner, I don't know a whole lot about networking, and Linux has so far confused me with its sudo nonsense! So speak clearly in concepts I can research rather than technical jargon if you want me to not cry.

As the title says, I want to be able to turn 3 servers into 1 machine. If that isn't going to happen then stop reading here and call me an idiot for buying 100 kg of stuff for no reason!

Primary plan: utilise the 16 drive bays in 1 server for NAS, and allocate cores to VMs I can access through otherwise utterly useless tablet devices that can just about run a browser and cost somewhere approaching zero. These get dished out amongst different rooms to provide more chunky processing capability without having to have an actual computer in each room, or having to buy expensive tablets. Ye know, like the starship Enterprise has one central computer doing all the shazam. Looking forward to Threadripper/Epyc getting cheap as anything in a few years' time! If the servers can be chained, obviously I'd end up with 48 bays for drives, which I'm sure RAIDZ would be able to find a use for when it's bored.

Secondary plan: get the other 2 systems up and running and meld them all into one badass machine which can be used as and when I feel like it for much more demanding tasks, say video editing or encoding or other such nonsense. Yes, I'll probably have to slap some graphics capability in them, but 680s seem to end up in the trash these days anyway.

Tertiary plan: spend leftover crunching power doing something bitcoin-shaped (if it's even worth doing any more), or even perhaps as a remote powerhouse that I can flog processing time on / donate to nuclear physics laboratories. Or, slightly more sensibly, give me the ability to continue composing/producing music over the internet, because I'm not spending £3k on a laptop which is actually up to the task. If I can't access my network (and thus VMs) from the internet then what the bloody hell is the point of the internet anyway?!

Machines are (drum roll) ProLiant ML370 G5s:
1. 2x X5355s with 44 GB (DDR2) and many, many HDDs
2. 1x X5459 with 16 GB (DDR2) and zero HDDs
3. A motherboard. And a case that weighs about 40 kg.
4. £200

Now, assuming what I want to do is at all feasible... Questions!

1. Will I need to use the same processor in each machine? One assumes that matched processors are required in a 2-socket board (yes, noob, I know, it's savage guesswork!), so would that mean I need all the processors to be the same?

2. Will Unraid or FreeNAS fit the bill? I'm quietly confident that some form of Ubuntu will manage the task, but that would mean I need to know things and stuff, and that's not currently the case.

3. Can you convince Linus et al to do a video on it? Seriously, if you search YouTube for 'daisy chain server' it's pretty much a googlewhack, picking up nothing but noise and some woman getting series and parallel electrical circuits a bit wrong. He'd get a million hits anyway, the stuff costs sod all, and it'd help usher in the future of houses which actually come with a computer installed in them. I bet even Bill Gates'd watch it!

4. Using Unraid/FreeNAS, can the VMs actually access the NAS?! I know it seems like a stupid question, but honestly, when I think about it the VMs would need to talk to the NAS as a network location, right? Would that make things get all confused?

6 (sod 5, that's boring). If there are many ways to daisy chain and it's not a myth, what's the best way? I can imagine a PCIe card that just connects the systems together on a primal level, but if it can only be achieved by Ethernet cable, is 1 Gbit/s actually swift enough?

G. Is there a resource (video channel or some such) which speaks Linux/Ubuntu but also translates into people? Level1Techs is great and they do seem the most likely, but I can't find anything which'd give me an idea of what is and isn't possible.

8. If Ethernet is the way forth... Is it better to hook the machines to a network switch using 2x Gbit/s inputs, rather than one into another into another and then 2 feeds to a main router from the first and last machine?

9. Can I add extra machines willy-nilly? 'Cause if I'm picking up more servers for dirt cheap, and decide 24 cores is getting dull and boring, and I've discovered some way to actually make money out of it, then I might want to waste even more time making a properly mental supercomputer which could one day take over the world! Yeah, ok, it's unlikely, but a megalomaniac can dream!

10. The servers have iLO: Integrated Lights-Out. What's that then? It seems to be a remote console thingy jobby, but it doesn't work in Win10 and I have no idea what I'm doing with WinServer '08!

Anyway, thank you very much for reading this internal monologue of mine, and I do hope that at the very least my enthusiasm has brightened your day!

Jon

PS: if it posts multiple times it's because my service provider is having trouble finding their own stable connections, let alone mine!
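On question 6 above ("is 1 Gbit/s actually swift enough?"), a quick back-of-envelope sketch puts numbers on it. The figures here are assumptions for illustration: line rate only, ignoring protocol overhead, and ~100 MB/s sequential for a single spinning disk of that era:

```python
# Gigabit Ethernet: 1 Gbit/s line rate converted to bytes per second.
line_rate_bits = 1_000_000_000
max_mb_per_s = line_rate_bits / 8 / 1_000_000   # 125 MB/s ceiling

# A single 3.5" HDD of that era manages roughly 100 MB/s sequential
# (assumed figure), so ONE fast disk nearly saturates one GbE link --
# a 16-bay array behind a single link will be badly bottlenecked.
hdd_mb_per_s = 100

# Time to shift a 60 GB project over the link at the theoretical ceiling:
project_gb = 60
seconds = project_gb * 1000 / max_mb_per_s
print(max_mb_per_s, round(seconds))  # 125.0 MB/s, 480 s (8 minutes)
```

So 1 Gbit/s is fine for file serving to tablets, but as an interconnect for melding machines into one it is orders of magnitude slower than local memory or PCIe, which is largely why "daisy-chaining servers into one computer" videos are thin on the ground.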
  6. Didn't read any of the other replies, but figured I'd give my experience. I have an ML370 G5 with 8x-size slots running at 4x. Took a Dremel to the graphics card (HD 7770) and chopped off half the slot, but it still didn't work. Turned out I needed a bus expander, which took the bandwidth from the neighbouring slot to bring the 8x up to a full 8x. Needed to use an extension ribbon to get under the card, but it works out OK floating in the realms of the case. No detectable signs of decreased performance, but OCing has been a pain to the point I just didn't bother. Got a 127% increase for a fair while (55 °C under load), but now it gets confused. Maybe I cooked it. Anyway, definitely doable, but make sure you don't chop off too many pins!!
  7. A USB 3 card looks like it's probably going to be the best long-term option, and definitely a lot cheaper than a NAS system! The slots are 8x size at 4x bus speed; that should be good for top-rate transfer, right?
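Rough numbers suggest yes, assuming the ML370 G5's slots are PCIe 1.x (a plausible assumption for a board of that generation): a gen-1 lane signals at 2.5 GT/s with 8b/10b encoding, i.e. 250 MB/s of payload per lane, so a x4 link carries about 1 GB/s, comfortably above USB 3.0's 5 Gbit/s signalling rate. A quick sanity check:

```python
# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding (8 data bits per 10 sent).
lane_gt_per_s = 2.5e9
pcie1_lane_mb = lane_gt_per_s * 8 / 10 / 8 / 1e6   # 250 MB/s per lane
x4_mb = pcie1_lane_mb * 4                           # 1000 MB/s on a x4 link

# USB 3.0: 5 Gbit/s signalling, also 8b/10b -> ~500 MB/s best case,
# before protocol overhead trims it further in practice.
usb3_mb = 5e9 * 8 / 10 / 8 / 1e6

print(x4_mb, usb3_mb)  # 1000.0 500.0 -> the x4 slot isn't the bottleneck
```

So even at 4x electrical, the slot has roughly double the headroom a single USB 3.0 port can use.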
  8. Thanks for replying, mate! No chance at all then? I was looking at USB enclosures, but the ports are USB 2, so not fast enough to play 60 GB odd of raw audio data across the line in real time. Obviously with 2x 8-drive arrays I can allocate plenty of room for ongoing projects, but then transferring onto archive disks will end up being a very laborious process. Will keep a-thinking!
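The arithmetic backs up the archiving pain, though raw streaming bandwidth is closer than it looks; it's the seek-heavy, many-track access pattern of a DAW that USB 2 handles badly rather than the headline rate. A rough sketch, with ~35 MB/s as an assumed real-world USB 2.0 bulk rate:

```python
# USB 2.0: 480 Mbit/s bus ceiling; ~35 MB/s is a realistic sustained
# bulk-transfer rate (assumed figure, not a measurement).
usb2_real_mb = 35

# Copying a finished 60 GB project off to an archive disk at that rate:
archive_seconds = 60_000 / usb2_real_mb
print(round(archive_seconds / 60))  # roughly half an hour per project

# Streaming: one 24-bit / 96 kHz mono track is 96,000 samples/s * 3 bytes.
track_bytes_per_s = 96_000 * 3
tracks_at_once = usb2_real_mb * 1_000_000 // track_bytes_per_s
print(tracks_at_once)  # sequential ceiling in tracks, before seek overhead
```

The sequential ceiling works out to over a hundred 24/96 tracks, but real multitrack playback scatters reads across the disk, so the usable figure over USB 2 is far lower.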
  9. Hey guys, first time posting, and though I'm not a complete novice, I will need a little easing into the technical descriptions which will no doubt follow. A short history: I bought a second-hand HP ProLiant ML370 G5 server a couple of years back (£300! Bargain!) to use as an audio (DAW) workstation. I'm running Win10 Pro. It's absolutely great for processing (less so for gaming, sadly) and I haven't got a problem running it. What I am struggling to understand, however, are the RAID controllers (P800 + P400) and how they actually work. (Aside from the fact it wants me to update the firmware; that's another issue.)

So here's my question: I want to be able to remove drives from the machine after I've finished working on the material they contain and the music is published, store them wherever I feel like, and replace them with a fresh drive ready for the next project. All fine, until it comes to plugging them back in again. As far as I can tell, for Windows to recognise the storage space it must first be allocated as a logical volume by the controller. But once the disk is removed and replaced (and another logical volume created), the controller seems to forget that the disk ever existed, and though it does acknowledge the disk in the controller set-up, actually allocating it as a logical volume again will destroy the data on the disk (or, probably more accurately, the allocation system no longer acknowledges the data as present; I'm not sure). I guess this is because the array controller actually dictates where the data is kept, while Windows is simply given a virtual version (which it reports as SSD).

So, is there a way to have Windows control the P400/P800 directly and store the knowledge required to re-access the drives? Or is there a feature of the P400/P800 that I'm missing when it comes to changing drives? The bays (8x 2.5" vertical caddy array) are hot-swappable, so in a perfect world I'd like to be able to use them much in the way one would a USB flash stick (software eject -> physical removal).

Internet searches over about a year of generally trying to figure out what the hell people are babbling on about have left me no wiser; maybe I'm asking the wrong question. So I figured where else to find a bunch of enthusiastic guys who could translate my query into an answer one way or the other. Right now I have about 2 disks left before I will need more space, but ideally I'd like to allocate specific artists/projects to specific disks, so really I need a fresh disk about now. NB: I realise HDDs do need to be spun up now and then during storage to help prevent degradation, but I'm not willing or even able to invest thousands of moneys into buying a secondary NAS array to start new projects.

Your help will be much appreciated, thank you!

Jon