Search the Community
Showing results for tags 'debian'.
-
So around Christmas I bought a used GTX 1060 3GB for Folding@home, but it refuses to fold. According to the log file the FAHClient isn't detecting the card properly; it says "No CUDA or OpenCL 1.2+ support detected for GPU slot 01". I tried installing the driver that Nvidia provides, but I can't get past the Nouveau driver, which just doesn't want to stay disabled: when I reboot the system it just comes back. The Nvidia installer wants me to reboot so it can put files into the Nouveau driver's directory to disable it, but this specific action makes the Nouveau driver come back(?). The GPU is functioning as far as I can tell; it displays everything on the screen properly. By the way, FAHControl doesn't work on Debian 12, so I have to configure everything on the command line, which has been erasing my sanity for the past few weeks (I'm fairly new to using Debian, so my knowledge is pretty basic). System specs: CPU: Core i7 4790K RAM: 16GB DDR3 GPU: GTX 1060 3GB OS: Debian 12 "Bookworm" F@H Version: 7.6.21 Any ideas on how to fix this will be greatly appreciated! EDIT: The CPU is folding with no issues, btw. log.txt
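A common way to keep Nouveau from coming back on Debian is to blacklist it in modprobe and rebuild the initramfs before running the NVIDIA installer; Nouveau is loaded from the initramfs early in boot, which is why a blacklist file alone can appear to do nothing until the initramfs is rebuilt. A minimal sketch (standard Debian paths, run as root):

```shell
# Create a modprobe blacklist for nouveau
cat > /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF

update-initramfs -u   # bake the blacklist into the initramfs so it applies at boot
reboot
```

On Debian 12 the packaged nvidia-driver from the non-free-firmware/non-free repos is usually less trouble than the .run installer, since it handles the Nouveau conflict and kernel module rebuilds for you.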
- 39 replies
-
Honestly, I remember being pissed at smartphones for so long, and at first it was because of the fact that I thought they should only be a decade behind mid-range gaming laptops, and yet their game library sucks. I now learned something different after forcing myself to daily drive an HTC ONE M8, which I initially bought as a way to justify smartphones having a crap gaming library, and that was to buy a phone that is hardware incapable of any form of modern gaming and being weaker than the Switch, so it would justify me not getting good games. That ultimately wasn't it. I actually found the real reason I hated smartphones, and it was actually because Android 9 and above are the very downfall of Android. You see, back when Android was using the first incarnation of Material Design, now retroactively named Material 1, Android was actually very good. It had everything that other mobile OSes did not have. It had widgets, emulators (to make gaming even possible), customization, file system access, IR blasters (certain devices only), call recording, and I think I missed something. Unfortunately, modern smartphones running Android are now missing everything except emulators, but restricting file system access actually also restricts emulators too, now making Android devices worthless. No wonder people now want to switch from Android. Remember the good old days when people switched to Android? Technically, most of those things are more of a device maker/hardware problem than with Android itself, but still, Android phones are getting worse, and it is clear that the Golden Age of Android is long over. A lot of this all started with Android 9 restricting call recording, making it near impossible. However, while Android sucks now, for the longest time, I actually had a really hard time finding any decent alternative. iOS is still out, because I really, really hate Apple. 
GameOS on PSP isn't even able to connect to a cellular network, and its successor, whatever the PlayStation Vita's OS is called, only supports 3G, which is now pretty much completely gone from all carriers. Really, my only option was to use old Android. However, there is a problem with that: old Android has a limit to how long it stays usable. Currently, I am using Android 7.1 on my HTC ONE M8, but soon that will not be usable, as app developers are planning on either going 64-bit-only or requiring Android 8.0 or later. So I came up with four options for how I can keep using smartphones like everyone else:
1. Screw connectivity, just use old Android no matter how well it is supported! Hell, even use Android 4.0 just as a big middle finger to modern Android. Lose support for so many games and apps; not that I play any online games, so it will mostly be Ren'Py games and a few other offline games that do not work. Lots of internet-connected utilities like Google Play Services, Discord, and web browsers will either not work or have problems.
2. Switch to Mobian with Plasma Mobile, a mobile OS that just seems too good to be true, so it naturally basically only supports the OnePlus 6/6T. Other devices it supports use lower-end hardware, so I will basically be forced to use the OnePlus 6/6T.
3. Buy a cellular laptop! Great, how do I even find one?
4. Screw cell phones entirely! I should just convince my family I do not want one or need one. I will do everything on my laptop (or at home on a desktop PC).
That is my ultimatum! No more modern Android, I am sick of it! And no iOS! And while we are at it, can I get an exhaustive list of everything normies do on their smartphones?
-
Hi all, I've got some devices on my LAN that I have blocked from accessing the Internet, but I would like to let them send emails via Gmail (or another email service). I've spent hours faffing with Postfix and Exim4, but have not managed to get anything working. I got closest with Exim4, but it was complaining that it was unable to resolve my domain name. (I do have a domain, but I do not want to expose my server to the internet directly for this purpose, and the domain is already being served by a mail server.) I am looking for a solution that will: receive emails from my LAN devices; send them to my public/internet email address via Gmail SMTP; run via Docker if possible. Am I looking for something impossible? Thanks in advance!
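This is a standard "satellite" relay setup, and it needs no inbound exposure at all: the relay only makes outbound connections to Gmail on port 587. A hedged sketch of the relevant Postfix settings (the Gmail address and app password are placeholders; Gmail requires an app password for SMTP AUTH):

```shell
# /etc/postfix/main.cf (relay-only additions)
relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
[smtp.gmail.com]:587 you@gmail.com:your-app-password
```

Several Docker images on Docker Hub wrap exactly this configuration behind a couple of environment variables, so the same settings translate directly to a containerised deployment if you'd rather not manage Postfix on the host.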
-
Hello, I am trying to set up a bridged network for QEMU VMs, but I can't make it work correctly. If I set up the bridged network via NetworkManager (nmtui), the QEMU guest VM cannot obtain an IP.

/etc/NetworkManager/system-connections/Wired connection 2.nmconnection:

[connection]
id=Wired connection 2
uuid=<uuid>
type=ethernet
interface-name=enp3s0
[ethernet]
[ipv4]
method=auto
[ipv6]
addr-gen-mode=default
method=auto
[proxy]

/etc/NetworkManager/system-connections/enp3s0.nmconnection (created via nmtui):

[connection]
id=enp3s0
uuid=<uuid>
type=ethernet
interface-name=enp3s0
master=<uuid>
slave-type=bridge
[ethernet]
[bridge-port]

If I set up the bridged network via /etc/network/interfaces.d/ it does work in both the host and the guest VM, but on the host system: the NetworkManager DE GUI shows "no connection"; the PC is connected to the internet via the bridged network br0 (and as a result does not receive the static IP configured for the enp3s0 MAC address); the enp3s0 interface is listed as unmanaged.

/etc/network/interfaces.d/br0 (created via CLI):

## DHCP ip config file for br0 ##
auto br0
# Bridge setup
iface br0 inet dhcp
bridge_ports enp3s0

How can I create a bridged interface that works for the QEMU guest systems while the host connection stays managed and shows up in the NetworkManager DE GUI?
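One approach that keeps everything inside NetworkManager (so the connection stays managed and visible in the DE applet) is to build the bridge with nmcli instead of /etc/network/interfaces. A sketch using the interface names from the post:

```shell
nmcli con add type bridge ifname br0 con-name br0          # the bridge itself
nmcli con add type bridge-slave ifname enp3s0 master br0   # enslave the physical NIC
nmcli con modify br0 ipv4.method auto                      # DHCP on the bridge, not on enp3s0
nmcli con up br0                                           # activate; enp3s0 loses its own IP
```

QEMU/libvirt guests then attach to br0. The key difference from the nmtui attempt above is that the IP configuration must live on the bridge connection, while the ethernet connection is only a slave; having a second standalone profile for enp3s0 (like "Wired connection 2") competing with the slave profile is a common reason guests fail to get an address.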
-
Hello everyone, I've run into an issue: I am not able to back up the config files of my pawelmalak/flame container, since the path I configured a while back seems to be empty:

noah@flamedashboard:~$ ls /vw-data
noah@flamedashboard:~$ ls -a /vw-data
.  ..

I had not noticed this for months, since I was able to configure the dashboard via the web interface, with the changes being persistent between reboots of the host and stops of the container. Updating the container resulted in it being reset to the default config, so I just restored a snapshot for now; that's also when I noticed that something was wrong. Here is a sample of the output when inspecting the running container:

[ { "Id": "07962ecefb2199e08f4aed8e9c0ee37862a74a715bdabc4fc2401ea25542bd18", "Created": "2022-12-19T19:19:23.349867838Z", "Path": "docker-entrypoint.sh", "Args": [ "sh", "-c", "chown -R node /app/data && node server.js" ], ... "HostConfig": { "Binds": [ "/vw-data/:/data/" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": { "5005/tcp": [ { "HostIp": "", "HostPort": "80" } ] }, "RestartPolicy": { "Name": "unless-stopped", "MaximumRetryCount": 0 }, ... "Mounts": [ { "Type": "bind", "Source": "/vw-data", "Destination": "/data", "Mode": "", "RW": true, "Propagation": "rprivate" } ], ...

Specs of the host running the container (from neofetch):
OS: Debian GNU/Linux 11 (bullseye) x86_64
Host: KVM/QEMU (Standard PC (i440FX + PIIX, 1996) pc-i440fx-8.0)
Kernel: 5.10.0-23-amd64
Uptime: 1 hour, 31 mins
Packages: 420 (dpkg)
Shell: bash 5.1.4
Resolution: 1280x800
Terminal: /dev/pts/0
CPU: QEMU Virtual version 2.5+ (2) @ 3.493GHz
GPU: 00:02.0 Vendor 1234 Device 1111
Memory: 230MiB / 964MiB

If you need any more details, please let me know. Thanks for reading!
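The inspect output above actually contains a hint: the entrypoint chowns and serves from /app/data, while the bind mount maps /vw-data to /data, so if flame keeps its config in /app/data (as the entrypoint suggests), the persistent data never reaches the host; it lives in the container's writable layer, which survives restarts but not image updates. A hedged sketch of a corrected run, keeping the port and restart policy from the inspect output:

```shell
# Bind the host directory to the path the app actually writes to (/app/data),
# not /data. Container/volume names are illustrative.
docker run -d --name flame \
  -p 80:5005 \
  -v /vw-data:/app/data \
  --restart unless-stopped \
  pawelmalak/flame
```

After recreating the container this way, the config should appear under /vw-data on the host and survive image updates; copying the old container's /app/data out with `docker cp` first preserves the current dashboard.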
-
So I've got three new drives on the way to upgrade my server. The old ones are going to get repurposed or sold. My home server is running headless Debian that I've set up and configured the way I like, and what I've been doing is: mdadm (RAID 5) -> LUKS encrypted container -> EXT4 filesystem. This has worked great. I even converted it from RAID 1 to RAID 5 a few years back while the filesystem was still live and in use, and even forgot and rebooted it during that operation and it just picked back up where it left off. When I had a drive die, the process of degrading the array and replacing the dead drive was simple and went without a hitch. The server has two primary jobs, Plex and Nextcloud. The Nextcloud data directory and my Plex media folder both live on the array. It's only ever accessed by a handful of people at once and my home network is just gigabit, so performance isn't the be-all and end-all, but I would like to retain the ability to saturate gigabit networking when transferring large files. However, I'm considering using ZFS when the new drives arrive for the following reasons. - A lot of the features I'm getting through the use of multiple, layered solutions are all available directly through ZFS itself. Instead of using mdadm for RAID, LUKS for encryption and then ext4 for the filesystem, ZFS would tick all those boxes by itself. - The one time I did have a drive die while using mdadm, the array was unresponsive until I physically removed the drive. I don't know if this was because of the nature of the failure, or because mdadm wasn't willing to automatically mark the drive as bad and keep going. The failure was of the arm that runs the read/write head, where you could hear it knocking and almost bouncing inside the drive. Once I removed the drive and marked the array as degraded it worked fine on two drives until the replacement arrived in the mail, but I'm wondering if ZFS would have handled this more gracefully.
I do have some concerns though with using ZFS. - I know the "1GB per TB of data" is not a hard and fast rule; rather, it's just a rule of thumb for people that enable de-duplication. But I've got 24TB of data right now and will have 36TB of available space, but the system only has 16GB of RAM and can't be upgraded, as that's all the motherboard supports. It's an old AM3 socket motherboard from Alvorix that's about 10 years old. Would this be a problem for a system that will be managing the storage AND hosting Plex and Nextcloud at the same time? It's working fine now, but I'm not sure if ZFS would cause issues. - How much of a hit on performance is the compression? Can it be turned off when creating the zpool? The CPU is an old 6 core Phenom II and it works fine now with mdadm and LUKS, but I worry that adding compression to the RAID striping calculations and the encryption might incur a noticeable performance hit. I'm just totally new to ZFS. I've known about it for a while, but have never implemented it myself, so I'm trying to decide whether to pull the trigger. Since I'll be creating an entirely new array and migrating the data, if I'm going to make the switch, now is the time. Also, what about BTRFS? Would it be a better solution? I know it supports snapshots, checksums and such, but it doesn't support encryption (yet), which I want, so if I went with it I'd have to layer it on top of LUKS like I'm doing now with EXT4. Would that have any effect on its ability to do checksums or snapshots? I'm basically just looking for some knowledge and advice. I appreciate anything y'all can educate me on.
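On the compression question: in ZFS, compression is a per-dataset property, it can be set (or disabled) at pool creation, and lz4 in particular is cheap enough that it is commonly left on even on old CPUs, since incompressible media files are detected and skipped early. A hedged sketch with hypothetical pool and device names:

```shell
# raidz1 pool with native ZFS encryption, compression disabled pool-wide
zpool create -o ashift=12 \
    -O encryption=aes-256-gcm -O keyformat=passphrase \
    -O compression=off \
    tank raidz1 /dev/sda /dev/sdb /dev/sdc

# compression can be flipped per dataset later; only newly written data is affected
zfs set compression=lz4 tank/nextcloud
```

Native encryption here replaces the LUKS layer, so the stack collapses to ZFS alone, which is the consolidation described above.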
-
Hello, following a guide, I am pretty sure that ZeroTier was installed and configured correctly. LAN and home network connections work fine, but any connection to the Internet is broken from devices inside the ZeroTier LAN. Current setup: ZeroTier account; Gateway: OrangePi with Armbian 23 Bullseye.

ip route:
default via 192.168.1.1 dev eth0 proto dhcp metric 100
169.254.0.0/16 dev ztfp6azmws scope link metric 1000
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.95 metric 100
192.168.200.0/24 dev ztfp6azmws proto kernel scope link src 192.168.200.95

iptables output:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
ACCEPT all -- 192.168.200.0/24 anywhere
ACCEPT all -- anywhere 192.168.200.0/24

iptables config:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -s 192.168.200.0/24 -j SNAT --to-source <XXX.XXX.XXX.XXX external IP>
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
-A FORWARD -i ztfp6azm -s 192.168.200.0/24 -d 0.0.0.0/0 -j ACCEPT
-A FORWARD -i eth0 -s 0.0.0.0/0 -d 192.168.200.0/24 -j ACCEPT
:OUTPUT ACCEPT [0:0]
COMMIT

IP forwarding output (cat /proc/sys/net/ipv4/ip_forward): 1

Firewall rules: PC configuration: From 192.168.200.XXX to 192.168.1.XXX everything works (access to Samba and other local resources). But connections from 192.168.200.XXX to the internet via 192.168.200.95 do not work (same thing from a Windows PC and from an Android phone). I do not understand why this bloody thing does not work.
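Two details worth checking against the output above: the saved FORWARD rules use "-i ztfp6azm" while the routing table names the interface "ztfp6azmws" (a truncated interface name silently matches no packets), and SNAT to a hard-coded address breaks whenever eth0's external IP changes. A minimal sketch of a ZeroTier gateway using MASQUERADE instead, with interface names taken from the post:

```shell
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE    # NAT using eth0's current IP
iptables -A FORWARD -i ztfp6azmws -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o ztfp6azmws -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

The ZeroTier controller also needs a managed route for 0.0.0.0/0 via 192.168.200.95, and the clients must have "Allow Default Route Override" (or equivalent) enabled, or they will never send internet traffic toward the gateway in the first place.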
-
PROXMOX is a powerful hypervisor used for hosting containers and virtual machines. The Operating System is available for free while offering repositories that you can pay for with a subscription. This guide will go over How to install the OS, How to disable the subscription notice and enterprise repositories that aren't free (if you're not interested that is), How to configure your virtual machine pools, How to add a CIFS network server, How to download and install Templates for Containers, and how to install your first Virtual Machine. 1. How to Install PROXMOX 2. How to Disable the Subscription Notice and Enterprise Repositories 3. How to Configure ZFS Storage Pools 4. How to Save a .ISO file on PROXMOX 5. How to Bond a Network Interface Port 6. How to Add a CIFS Network Server 7. How to Download and Install Templates for Containers 8. How to Install Your First Virtual Machine 9. Hardware Pass-Through This concludes the PROXMOX Beginners Guide. If there's anything that needs revising or if you want something added just let me know.
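As a taste of step 2, the subscription repository lives in a single apt source file, so disabling it and enabling the free repository is two lines. A sketch for Bullseye-based Proxmox 7 (adjust the suite name to match your version):

```shell
# comment out the enterprise repository
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repository and refresh
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
```

The no-subscription repo is fully functional; Proxmox simply recommends the enterprise repo for production because its packages receive extra testing.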
- 100 replies
-
Hi, I'm a total newcomer to ZFS and only have a little bit of experience with managing a Linux server. I started to set up my own Debian 11 home server using ZFS for a little storage pool of three 8TB WD80EZAZ in raidz1, which would be used for hosting a Time Machine backup, my media, and some other larger files, with Jellyfin and Samba Docker containers for access, for me and my two flatmates. I shucked these out of WD My Books I bought 5 years ago; previously they were just internally mounted in my personal Windows machine without any noticeable problems. But as I put them in my new server, made a ZFS pool out of them and started copying things over, it started to behave weirdly. My datasets were the following:

fuzzel@lapis:~$ zfs list -H -o name
data
data/media
data/media/person1
data/media/person2
data/media/person3
data/share/private
data/share/private/person1
data/share/private/person2
data/share/private/person3
data/share/public
data/timemachine

fuzzel@lapis:~$ zfs get -s local all
data              mountpoint   /data        local
data              compression  lz4          local
data              atime        on           local
data              aclmode      groupmask    local
data              aclinherit   passthrough  local
data              xattr        sa           local
data              relatime     on           local
data/media        recordsize   1M           local
data/media        compression  off          local
data/timemachine  recordsize   1M           local
data/timemachine  compression  zle          local
data/timemachine  atime        off          local
data/timemachine  devices      off          local
data/timemachine  exec         off          local
data/timemachine  setuid       off          local
data/timemachine  aclmode      discard      local
data/timemachine  aclinherit   restricted   local
data/timemachine  xattr        sa           local
data/timemachine  refquota     500G         local
data/timemachine  logbias      throughput   local
data/timemachine  sync         disabled     local
data/timemachine  dnodesize    auto         local
data/timemachine  acltype      off          local
data/timemachine  relatime     off          local

At first everything went smoothly: I set up a Time Machine backup from my MacBook and used rsync to copy around 5TB of videos into the data/media dataset without any problems.
I then noticed that I had forgotten to set some properties on my datasets, so I deleted all datasets except data/timemachine, set the properties, created the datasets again and started copying the media again. I wanted EA support for the Samba shares, so I added to data: xattr=sa acltype=posix aclinherit=passthrough aclmode=groupmask. The second copy to data/media again went without a problem. But once I started to copy other things into another dataset, rsync just froze. I couldn't abort it or kill it, not even with SIGKILL (kill -9 <pid>). While it hung, I tried to navigate into the target dataset it was copying into. In all the parent datasets I could run ls with no problem, but in the target dataset where rsync froze, even ls froze, again with no way to kill it. The only remaining option was to reboot, which expectedly threw [FAILURES] during shutdown because it couldn't unmount the datasets that were freezing. After a reboot I ran a scrub on my pool and started to get a couple dozen checksum errors. Most of them could be recovered from, but a few caused a couple of the media files I had copied over to be corrupted. So I completely destroyed the pool, purged ZFS from the system, did a full system update to Debian 11.7 (was on 11.6), reinstalled ZFS, created the pool again, and this time did **not** set the additional properties I had forgotten the first time (maybe they were the cause?). Started copying again. This time I didn't bother with the 5TB of media and started with the other files that first caused rsync to freeze. Again it froze. But now it got worse: no matter which dataset I ran ls in, it froze. I wondered whether the rsync freeze was actually just a display error and it was still copying in the background, so I tried to run iotop, which also froze. I hopelessly tried to kill the frozen processes again, so I ran ps -aux to get the PIDs, which now also started to hang in the middle of printing out the process list.
Luckily, at least ps was stoppable with a simple ^C. After that I had enough and started to investigate the drives more closely with SMART tests and badblocks. I used this script: https://github.com/Spearfoot/disk-burnin-and-testing which does: a short SMART test, badblocks, then a long SMART test. Both the short and long SMART tests reported no issues whatsoever, but all drives reported a couple dozen compare errors during badblocks (e.g. (0/0/33)), yet smartctl says 0 reallocated sectors. Does anyone have a clue what could cause this? Are the drives really faulty? Am I using a weird combination of ZFS properties? Does it have something to do with compression (data/media always copied without problems, but other datasets where lz4 was enabled caused freezes)?
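For what it's worth, on Linux the Samba-relevant properties are usually set at dataset creation rather than retrofitted onto populated datasets, and the value OpenZFS documents for POSIX ACLs is `posixacl` (newer releases also accept `posix` as an alias). A hedged sketch using the dataset names from the post:

```shell
# Set xattr/ACL properties up front; children inherit them
zfs create -o xattr=sa -o acltype=posixacl \
    -o aclinherit=passthrough -o aclmode=groupmask \
    data/share
```

That said, hangs that survive kill -9 combined with badblocks compare errors on all three drives at once point more toward a shared hardware path (SATA/power cables, controller, PSU under load) than toward any particular property combination; `dmesg` during one of the freezes would likely show hung-task or ATA error messages that narrow it down.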
-
Hello there, A week ago I made a mistake and ran some malicious software, which used a token logger to hack into my Discord account. After finding out and wiping everything Discord-related from my SSD, I thought I could finally switch to Linux for good. So I put in another SSD and installed Debian on it (I already use Debian at work, so why not). The week was full of troubleshooting, having "fun" with Nvidia driver issues, Steam problems and so on. At one point, grub had a little hiccup, and then I think it happened. Now when I start my PC, it POSTs very quickly to the BIOS splash screen, then stays like that for around 8 minutes; after that it goes into the BIOS if you pressed DEL during those 8 minutes, or it goes to grub and boots the selected OS. At that point I was so fed up that I switched my boot order and made my Windows SSD the primary boot device, but I still have an 8 minute boot to the OS, and Windows itself takes another 2 minutes to boot. When I'm in the OS itself everything is responsive: no suspicious activity, no load on my CPU/GPU, nothing. Then I decided to flash a new BIOS onto my motherboard, but that didn't help either. I have now reached a point where I'm quite desperate and no longer have any idea what to do. PC Specs: Windows 10 Home 22H2 64 bit | Debian 12 bookworm Alpha ASUS ROG STRIX X570-E Gaming (BIOS version 4408, latest) AMD Ryzen 7 5800X3D (stock) 4x16GB (64GB) G.Skill TridentZ DDR4-3200 (DOCP profile 1 (3200 MHz)) ASUS TUF NVidia RTX 3090 OC edition (stock) Seasonic TX-1000 (1000 Watt PSU) 2TB XPG GAMMIX S5 NVMe (Windows Boot SSD) 1TB Crucial P3 NVMe (Grub/Debian Boot SSD)
- 6 replies
-
Tagged with: bios, long boot times (and 3 more)
-
Almost everywhere I go, I see Ubuntu being called a Debian distribution. Debian was begun in 1993 by Ian Murdock, Ubuntu in 2004 by Mark Shuttleworth. Ubuntu is derived from Debian. Since its inception, Ubuntu has gone its separate way, becoming more commercial and incorporating non-free software. So, one is able to say that Ubuntu is related to Debian and be correct. To say it is a Debian distro is not correct. Anyone who has tried both can verify that installing Debian is a PITA whereas installing Ubuntu is pretty easy. If someone has been using both for several years, he or she can also see the difference in stability. In short, Ubuntu can be buggy and unstable, especially if other than the Long Term Support (LTS) versions of any 'buntu (Kubuntu, Lubuntu, Ubuntu, Xubuntu, et cetera) are used. Debian, once installed, is very stable, which is why it is one of the preferred server distros. I have included, more or less chronologically, links that show how the two differ, which is better for beginners learning about GNU/Linux, and which non-Ubuntu distros are suitable for newbies. I hope it sheds some light on why the two are not in the same category.
Why Linus Torvalds Doesn't Like Using Debian Or Ubuntu Linux?
Debian vs. Ubuntu: Which One Should You Use?
Debian vs Ubuntu – Main Differences and Similarities in 2020
Debian vs Ubuntu in 2021 - The Ultimate Showdown
6 Best Linux Distributions That are Not Based on Ubuntu or Debian
Choose what you may; the first main thing is to have fun! The second thing is to wean oneself from Windows -- that is, should a person not be willing to spend hard-earned cash unnecessarily!
-
Hey, I recently installed Kali Linux alongside Windows 10 in a dual boot. I have 3 monitors: 2 of them run on a 1650 Super and the other one runs on the Vega 8 in the CPU. I use Force IGD to make the integrated GPU power a display. I left the settings the same on Kali, but only the integrated GPU is running, and the other 2 displays are not showing up in the settings on Kali. Thanks.
-
Tagged with: kali linux, debian (and 1 more)
-
Hey guys, I'm running Proxmox 7.0-13, which seems to be based on Debian 11. Does anyone know how to get the Mellanox drivers working? The latest driver seems to be only for Debian 10.8. I'm just wondering if anybody else got this working. I just don't want to go through the hassle of downgrading my Proxmox version to 6.0 or even lower. Thanks a lot! Edit: I probably should have stated that you cannot use the latest driver; it will simply not install, even when passed the "-force" argument.
-
I have a system with an i3 9100 (with integrated graphics) and 8GB RAM. When I tried to install Linux Mint, it did not install. I tried many other distros, but none of them installed: the installer crashed every time. Yet when I used the same Linux Mint bootable USB on a Core 2 Duo, it installed and works fine. Changing USB drives and ports didn't help. What could be the reason?
- 5 replies
-
Tagged with: troubleshoot, linux (and 2 more)
-
I'd like to begin by saying all my data is safe and my drive is back to normal, but as a Linux novice I really don't know what happened nor why my solution worked, which bothers me. Also, this post may be better suited to another subforum, but I'm assuming it's Linux-related just because that's the least understood (for me) part of the process. I have had a Raspberry Pi 4 acting as a NAS using Samba on Debian (set up using PuTTY from a Windows machine, and accessed as a mapped network drive in Windows) for over a year now, using just the MicroSD as the storage device with absolutely no issues. Yesterday, I added a SATA SSD using a USB adapter. I used fdisk to create a partition, mkfs.ext4 to create the file system, and cp to copy all files from the MicroSD to the SSD. After mounting, I edited the smb.conf file to point to the SSD directory rather than that of the SD card. I was able to use this all day, reading old files, and writing and reading new files as well. This morning, I couldn't read a file, then I noticed the vast majority of the files were missing. In Windows Explorer, the network drive read 0 bytes free of 50-something GB (it was formerly reading correctly as a 230GB drive). I went into Debian and ran fsck -p on the device which, after a few minutes, appears to have completely corrected the issue. So again, all of my files are fine and they're also backed up. However, some insight as to what could have happened here would be greatly appreciated. Thanks in advance!
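For future reference, the manual equivalent of what ran here is e2fsck against the unmounted partition; -p ("preen") only auto-fixes the safe class of errors, which is why it can finish in minutes. USB-SATA adapters dropping off the bus mid-write are a classic source of exactly this kind of ext4 damage. A sketch with a hypothetical device name:

```shell
umount /dev/sda1          # never fsck a mounted filesystem
e2fsck -f -p /dev/sda1    # -f forces a full check even if marked clean, -p auto-fixes safe errors
dmesg | grep -i usb       # look for resets/disconnects pointing at the adapter
```

If dmesg shows USB resets under load, a powered adapter (or disabling UAS for that adapter) is the usual remedy.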
-
Wine 7.0 Install Instructions

So you have decided that you want to install Wine 7.0 on Ubuntu, Ubuntu variants or Debian. Well, this quick and easy install guide will take you over how to install it directly from WineHQ. As of the time of writing, the Wine binary has not been updated in the official repository, so the only way to get Wine 7.0 is to download and install directly from WineHQ. This is pretty easy, with no issues.

Preparing for install

When you are ready to install Wine 7.0, check to see if you have Wine installed first. To do this, open up a terminal window and type the following:

wine --version

Once you have done this and established that you either have Wine installed or you are Wine-less, we can move on.

Removing older versions of Wine

This is pretty easy to do. Type the following into the command terminal that you already have open:

sudo apt-get remove --auto-remove winehq-stable

Installing Wine

This is the part where we install Wine. Follow this list of steps.

This enables 32-bit architecture on your operating system, allowing the use of 32-bit software:

sudo dpkg --add-architecture i386

This step downloads the public key from WineHQ to allow installing Wine software:

wget -nc https://dl.winehq.org/wine-builds/winehq.key

This step adds the public key to your keyring:

sudo apt-key add winehq.key

This step adds the WineHQ repository for version 7 to the repository list:

sudo add-apt-repository 'deb https://dl.winehq.org/wine-builds/debian/ bullseye main'

It is HIGHLY important that you do not forget the '. If you forget the ', it won't work and you will get a blinking >_ If that shows up, just press CTRL+C and correct the mistake.

This step updates your package lists:

sudo apt update

This step finally installs Wine 7.0:

sudo apt install --install-recommends winehq-stable

This will install Wine 7.0 and get the most up-to-date version installed. You will be able to run Windows software with no issues at all.
You are likely wondering where the Debian version is. Well I made install scripts for all of this on my GitHub that cover both Ubuntu, Ubuntu variations and Debian versions 10, 11 and testing. You can also watch a guide video that I made if you don't want to use the install script. Thank you for reading this. I hope you have a good rest of your day! GitHub: https://github.com/Nmatt1/Wine7.0 YouTube:
-
Several vulnerabilities have been discovered in the OpenJDK Java runtime, which may result in denial of service, incorrect Kerberos ticket use, selection of weak ciphers or information disclosure. If you're running Debian or a Debian-based system, you need to upgrade your openjdk-11 packages. https://www.debian.org/security/2021/dsa-5000
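On a stock Debian system with the security repository enabled, pulling in the patched packages is just an update plus a targeted upgrade; a sketch (only the openjdk-11 packages you actually have installed will be touched):

```shell
sudo apt update
sudo apt install --only-upgrade openjdk-11-jre openjdk-11-jre-headless openjdk-11-jdk
java -version   # confirm the patched build is now active
```

Running Java services (Tomcat, Jenkins, Minecraft servers, etc.) need a restart afterwards to pick up the new runtime.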
-
Hello! I have installed Proxmox on my Dell R720, HP DL380p Gen8, and my Intel R1304JP4OC. The problem I have is that the Dell and HP get the error "Could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available (500)". The Intel server runs with software RAID, and the other two are locked to hardware RAID. I have tried to update initramfs according to a post on the Proxmox forum, but it doesn't work. I can use a workaround by putting the boot drive and the rest of the storage as two separate disks in the RAID configuration, but this leaves me unable to use 4 of the 16 disks that the servers have (2x8). Is it possible to fix this? Thanks in advance /Erik
-
Hey folks, I installed Steam on Debian 11. I can't start it because it needs to update dependencies, but I can't run Steam as root either; I get messages from Steam saying it can't run as the root user. Has anyone else run into this issue before?
- 7 replies
-
Tagged with: steam, steampowered (and 1 more)
-
I am having a problem running sudo apt-get update on my 32-bit installation of antiX 19.3. When I run the command and just "yes" my way through the prompts, some things will update but some stuff will fail in ways that I'm seemingly not able to fix. I'm uncertain exactly what's going wrong, and I was up until 2 AM last night searching the internet for possible causes and solutions with no luck. I'm guessing my knowledge is too limited to fully understand the error, and thus I can't diagnose the issue. I am uncertain where the logs are kept, so I just lazily copied and pasted my terminal window below:

$ sudo apt-get update
[sudo] password for HeroRareheart:
Get:1 http://deb.debian.org/debian buster-backports InRelease [46.7 kB]
Get:2 http://debian.mirror.globo.tech/debian buster-updates InRelease [51.9 kB]
Hit:3 http://security.debian.org buster/updates InRelease
Hit:4 http://debian.mirror.globo.tech/debian buster InRelease
Ign:5 http://debian.mirror.globo.tech/debian/buster main InRelease
Err:6 http://debian.mirror.globo.tech/debian/buster main Release
  404 Not Found [IP: 74.120.223.25 80]
Get:7 https://mirror.freedif.org/MXLinux/repo/antix/buster buster InRelease [27.4 kB]
Err:7 https://mirror.freedif.org/MXLinux/repo/antix/buster buster InRelease
  The following signatures were invalid: EXPKEYSIG DB36CDF3452F0C20 antiX (antix repo) <repo@antixlinux.com>
Reading package lists... Done
E: The repository 'http://debian.mirror.globo.tech/debian/buster main Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: GPG error: https://mirror.freedif.org/MXLinux/repo/antix/buster buster InRelease: The following signatures were invalid: EXPKEYSIG DB36CDF3452F0C20 antiX (antix repo) <repo@antixlinux.com>
E: The repository 'https://mirror.freedif.org/MXLinux/repo/antix/buster buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

I can clearly see a 404 error, an invalid signature error, an error claiming a repository does not have a Release file, and an error claiming something I tried to update was unsigned. I know what a 404 error is, so I assume this means some connection I need to be making is failing somehow. I have heard the term "signing code" before, but I do not understand exactly what that is or how it works. I think it's some sort of security check to make sure code is safe, but again, I do not know enough about it. I was able to find an article on a similar "repository does not have a Release file" error, and upon trying to visit the repository in my browser I got a 404 error. I then tried just http://debian.mirror.globo.tech/debian and that worked fine; it appears that the 404 error was caused by the total lack of a buster folder within the debian folder. Upon visiting the repository giving me the invalid signature error, there were no files in it, just two folders. Upon navigating deeper, it looks like the URL needs to be https://mirror.freedif.org/MXLinux/repo/antix/buster/dists/buster/ not https://mirror.freedif.org/MXLinux/repo/antix/buster/. All this said, I do not know if I am correct and I do not know where to even begin with fixing this. Any and all help appreciated.
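Looking closer at the paste, the failing index is requested as ".../debian/buster main Release": the suite name "buster" has ended up inside the URL, and "main" is being treated as the suite, whereas the working Hit:4 entry uses ".../debian buster InRelease". That points to a malformed line in /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/) rather than a broken mirror. A sketch of the conventional form:

```shell
# broken (suite folded into the URL):
#   deb http://debian.mirror.globo.tech/debian/buster main
# conventional form: URL, then suite, then components:
deb http://debian.mirror.globo.tech/debian buster main
```

The EXPKEYSIG error is a separate problem: the antiX repository signing key has expired, and the usual fix is updating the antiX keyring package from a still-working antiX/MX mirror (or reimporting the current key as described in the antiX documentation) rather than editing URLs.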
-
So I just finished upgrading the storage in my home server, and I learned a few lessons that I thought I'd share with you guys, so that hopefully some of you might find it useful at a later date. I recorded a lot of it and plan on writing it up in a blog and making a YouTube video about it. If/when I do, I'll edit this post with links to those items.

First off, my server runs Debian 10 in a headless state with no GUI installed, which means 99% of my management is done via SSH (unless some problem prevents it from booting and I have to physically go connect a monitor to it), so everything I did was done from the command line. It "was" using two 12TB WD Gold drives in Linux software RAID 1 (mdadm). I created a partition with a specific sector count on each of those drives, about 1GB less than the total capacity of the drive, and then added that partition as the RAID member. That way, if I ever replaced a drive with one of a different model and the capacities were "slightly" off, it wouldn't be an issue.

When creating the actual data partition on the RAID device, I encrypted it so that if I ever had to RMA a drive, nobody would be able to retrieve sensitive or personal information from it later, especially since RAID 1 has a complete copy of everything on both drives, so one drive is enough to recover everything. The partition didn't use to be encrypted, until I had my first drive failure and realized I was about to ship a drive with lots of personal files through the US Postal Service, and that in all likelihood they would refurbish the drive and resell it with the same platters inside, since I'm pretty sure it was just the read head that died. I managed to get the drive working long enough to run shred on about 90% of it before it quit for good (which was fine, because it wasn't 90% full) before shipping it off, but after that I decided to encrypt the partition.
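The "partition slightly smaller than the disk" trick above can be sketched as a bit of sector arithmetic. Everything here is hypothetical (device names, the sector count); on a real disk you'd get the count from blockdev --getsz:

```shell
# Leave ~1GiB of slack so a replacement drive that's slightly smaller
# still fits the RAID member partition.
DISK_SECTORS=23437770752                       # 512-byte sectors (~12TB drive)
SLACK_SECTORS=$((1024 * 1024 * 1024 / 512))    # 1GiB worth of sectors
END_SECTOR=$((DISK_SECTORS - SLACK_SECTORS))
echo "last usable sector: $END_SECTOR"

# The layout itself would then look roughly like (NOT executed here):
#   sgdisk -n 1:2048:$END_SECTOR -t 1:fd00 /dev/sdX   # RAID member partition
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
#   cryptsetup luksFormat /dev/md0                    # encryption on top of the array
```

The commented commands are only a sketch of the setup described in the post, not a copy of the author's exact commands.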
We started approaching the 12TB limit recently, and I decided the easiest course of action would be to buy one additional drive and switch to RAID 5, which would double our storage capacity to 24TB and still let us lose one drive without losing any data. I have a separate off-site backup drive, so we're talking strictly about what's physically inside the server. My only concern was that I really didn't want to delete my RAID group and re-make it if I didn't have to. I have the backup, but it would have taken over 24 hours to restore from it since it's a USB external drive, and that would have left us without Nextcloud and Plex for that entire time (I make heavy use of Nextcloud, and my kids and nieces/nephews love Plex). Plus, until the restore finished, I would only have one complete copy of my data, and I'd just have to hope my backup drive didn't die during the restoration.

As it turns out, if you're using Linux software RAID (mdadm), a two-disk RAID 1 can be directly converted to a degraded two-disk RAID 5, at which point you just add the third disk so it's no longer degraded. All you have to do is run:

mdadm --grow /dev/mdfoo --level=5
mdadm --grow /dev/mdfoo --add /dev/foo --raid-devices=3

The cool thing was that even though the reshape took almost two days, my data was accessible the whole time, so we were still able to keep using the various services on the server. On top of that, once it was done, it automatically grew the array as a whole to the proper new size of 24TB, although I had read that in some cases you might need to do that part manually. The server obviously runs on a UPS, so a power outage or flicker during the process wouldn't have bothered anything. I even rebooted the server once during the reshaping process, and it just picked up where it left off once it was back up and running.
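While a reshape like this runs, progress is reported in /proc/mdstat. As a sketch, here's how you might pull the percentage and ETA out of a captured mdstat line (the line format is real, but the values below are hypothetical):

```shell
# A reshape progress line as it appears in /proc/mdstat (values hypothetical):
mdstat_line='[=====>...............]  reshape = 28.5% (3340955648/11718686720) finish=1850.3min speed=75400K/sec'

# Extract the completed percentage and the estimated minutes remaining:
pct=$(echo "$mdstat_line" | sed -E 's/.*reshape = ([0-9.]+)%.*/\1/')
eta=$(echo "$mdstat_line" | sed -E 's/.*finish=([0-9.]+)min.*/\1/')
echo "reshape ${pct}% done, ~${eta} minutes remaining"
```

On the live system you'd just read the real thing with cat /proc/mdstat, or watch it repeatedly; mdadm --detail on the array device gives the same information in longer form.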
My next issue was that, even after the many hours it took to reshape the array into a proper RAID 5 spanning all three disks, the actual data partition on it was still only 12TB. The partition was encrypted, so my next task was to resize it to take advantage of the increased capacity of the RAID pool. After doing some reading and experimenting in a VM, I discovered that once the reshape was done, all I had to do was unmount and luksClose the encrypted partition, then unlock it again with luksOpen, and LUKS automatically extended the LUKS device to fill the new capacity of the RAID array. Then I just had to do a filesystem check on the data partition within the LUKS device by running:

e2fsck -f /dev/mapper/somedrive

Then grow the filesystem (resize2fs resizes the filesystem, not the partition itself) with:

resize2fs /dev/mapper/somedrive

The resize itself only took a couple of minutes. So what I've learned from all this is that, if you're willing to do some reading, Linux disk management via the terminal is incredibly powerful and flexible. I was able to convert a software RAID 1 directly to a software RAID 5, and then resize an encrypted partition on that RAID 5 group, all without losing any data and with minimal actual downtime. Obviously, I updated my backup drive before starting the process, just in case. I'm just incredibly grateful that it all went off without a hitch, which isn't always the case when it comes to technology, so I thought I'd share my little learning experience in case it helps or entertains some of you. The info I found about converting the RAID array is here: https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Upgrading_a_mirror_raid_to_a_parity_raid And the help I got for resizing the encrypted partition came from "jelly" on the Debian IRC on Freenode.
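Putting the steps above in order, the whole resize sequence can be sketched like this. The mapper name and mount point are hypothetical, and the helper only echoes each command (a dry run), since these commands operate on live block devices:

```shell
# The resize sequence described above, ordered. 'run' echoes instead of
# executing; swap it for real execution on the actual system.
DEV=/dev/md0
NAME=somedrive
run() { echo "+ $*"; }

run umount /mnt/data
run cryptsetup luksClose "$NAME"
run cryptsetup luksOpen "$DEV" "$NAME"   # re-opening picks up the grown array size
run e2fsck -f "/dev/mapper/$NAME"        # filesystem must pass a check before resizing
run resize2fs "/dev/mapper/$NAME"        # grow the ext filesystem to fill the device
run mount "/dev/mapper/$NAME" /mnt/data
```

With no size argument, resize2fs grows the filesystem to the full size of the underlying device, which is why no explicit new size was needed here.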
-
I've tried to plot several drives with TurboPlotter, but I keep getting the "can't create file" error. Keep in mind I'm a complete noob, so I don't know what I'm doing.
-
Ok so I'm wondering if anyone knows which option I should choose from the Linux downloads for MultiMC 5: is it the Ubuntu version or the 64-bit generic tar.gz file to run on Debian Buster? And if it's the tar.gz file, how do I do it?
- 2 replies
-