Azgoth 2

Member
  • Content Count

    317

  1. Python has a portable version (look for the "embedded ZIP" download), with versions for every OS. The 64-bit Windows version comes in at 12.7MB after unzipping. Pros: Code is very easy to write and maintain. Very robust built-in tools for file and string manipulation; more powerful tools in the standard library's os, sys, and re modules (for general OS interfaces, miscellaneous system/interpreter functionality, and regular expressions, respectively). Excellent documentation. The portable version doesn't seem to be generating any compiled bytecode files.
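A quick sketch of the three standard-library modules mentioned above (the text and file extension here are just invented for illustration):

```python
import os
import re
import sys

# os: general OS interfaces, e.g. listing Python files in the current directory
py_files = [f for f in os.listdir(".") if f.endswith(".py")]

# sys: interpreter functionality, e.g. reporting the running Python version
print("Running Python", sys.version.split()[0])

# re: regular expressions, e.g. pulling version-like strings out of text
text = "release 3.8.2, previously 3.7.7"
versions = re.findall(r"\d+\.\d+\.\d+", text)
print(versions)  # → ['3.8.2', '3.7.7']
```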
  2. I'm a big fan of Geany as an editor. It's a pretty light IDE, though it lacks some of the features that bigger programs like Visual Studio or JetBrains' IDEs have (granted, they're all features I instantly turn off, so I don't miss them). I've run it on my original Raspberry Pi B+ without issue. It's basically a fancy text editor--syntax highlighting, automatic indentation, code folding, project directory navigation--but it's designed to work with any language you can dream of and lets you specify shell commands to compile and run your program. That last point is a bit of a quirk at first, but it's
  3. Short answer: no, you don't need one. Longer answer: you are unlikely to see much benefit from one unless 1) you've always had a numpad, and thus never learned to type on the number row, or 2) you are constantly entering numeric values--e.g., hard-coding some arrays, manually specifying numeric coefficients in mathematical equations, or doing data-entry type activities. For that, a numpad tends to speed me up a lot, but only when I'm entering many numbers in succession without needing to jump back to the main keyboard area. So if you want a TKL, get a TKL. You can always get
  4. How does OpenMV deal with images? Does it use something like a 2D array of pixel values? If so, you could use some of the tools from probably the Numpy or Scipy libraries to clip your pixel values at a certain level (you'd probably need to play with the level a bit to find a good one). Or you could use the PIL library:
     from PIL import Image, ImageEnhance
     im = Image.open("/path/to/saved/image")
     contrast = ImageEnhance.Contrast(im)
     contrast = contrast.enhance(FACTOR)  # set FACTOR > 1 to enhance contrast, < 1 to decrease
     # either save the image...
     contrast.save("/path/to/new/location")
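If the image does come back as a 2D array of pixel values, the clipping idea can be sketched with plain NumPy (a rough example; the 200 threshold is arbitrary and you'd tune it):

```python
import numpy as np

# Fake 8-bit grayscale image: a 4x4 array of pixel intensities
img = np.array([[10, 250, 30, 255],
                [90, 120, 240, 60],
                [200, 210, 15, 80],
                [70, 230, 190, 40]], dtype=np.uint8)

# Clip every pixel brighter than the threshold down to the threshold
clipped = np.clip(img, 0, 200)
print(clipped.max())  # → 200
```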
  5. Your first command might be looking for the wrong package. On Debian-based distros, at least, the package name for pip is python3-pip--not sure if it's the same in the CentOS repos, but look for something like that. Also make sure you don't have a bash alias that points "pip3" at the Python 2.6 pip--it seems like that might be happening, since when you type "pip3 install requests" it says it's looking in the Python 2.6 directories. Also, after installing, run "pip install -U pip" to upgrade pip (per the error message)--just to make sure you're using the newest version.
  6. Libraries for AI: For general data and numeric work: the Scipy stack (numpy, scipy, matplotlib, pandas in particular)--necessary for doing really any work with data (and AI is all about data). Numpy gives you native multidimensional arrays and lots of very fast, efficient operations on those arrays (e.g., dot products, matrix norms, convolutions). Scipy has a lot of general scientific functions (e.g. Fourier analysis, Voronoi tessellations, and function optimization routines), and also sparse matrix formats (for storing large data sets that have a lot of zeros in a memory-efficient way).
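A minimal taste of the numpy operations mentioned (just a sketch with made-up arrays):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(np.dot(a, b))                          # matrix product → [[19 22] [43 50]]
print(np.linalg.norm(a))                     # Frobenius norm of a
print(np.convolve([1, 2, 3], [0, 1, 0.5]))   # 1-D convolution of two sequences
```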
  7. The single best way, in my experience, to learn how to use any programming language is a project-based approach. Pick or find a thing you want to do--maybe program a game, or a Twitter bot, a generative art maker, or some quality-of-life programs you expect to use often--and then google the hell out of how to do the different parts of it. The fact that you're enrolled in a course right now might give you some natural projects/things to do, but there are a bunch of websites out there that collect programming-oriented problems (Project Euler for computational mathematics; Rosalind for computat
  8. You can use most distros as a server. You'll commonly see Debian and Slackware used; they don't have distinct server variants, but they lend themselves very nicely to being configured as servers. As for distros that are specifically designed for server use, Ubuntu Server, Red Hat Enterprise Linux, and SUSE Enterprise Linux all have a corporate support infrastructure behind them, which makes them appealing to a lot of businesses. CentOS is a community-driven fork of Red Hat that's very popular, too.
  9. Since you're interested in GPU-accelerated math and neural nets, as mentioned, you won't be able to get anything serious done on a Pi due to its very low specs (at least in terms of building large or complex models), but you can get started with the basics. That said: for straight GPU-accelerated math, look into Tensorflow and Theano. They're both great libraries for GPU math, each with pros and cons that you'll want to read up on a bit. In short, though: Theano is older and more mature, but Tensorflow is developed by Google and is rapidly taking over other GPU-accelerated math libraries/frameworks.
  10. Your options are extremely limited given those specs. Tiny Core has already been mentioned, but I'll second it--though be warned it comes with basically nothing installed--not even a lot of the command line tools that you get in other distros. Debian with a minimal install (look for the "network install" .iso image) and no GUI might work, as might a minimalist Arch install (as in, one where you don't put much stuff on it). Non-Linux OSs that should work would include FreeDOS, a free and open source implementation of MS-DOS, and Kolibri OS, an operating system that's written entirely in Assembly.
  11. As long as you don't have multiple matches per line to worry about, that should work--it'll print each match to a new line in stdout, based on my quick tests. Admittedly I don't use awk/gawk much, so I wasn't aware it had issues with multiple matches per line until I looked it up just now. Frankly I'm starting to lean towards just writing a script in Python or whatever language you're comfortable with to do the matching for you:
      #!/usr/bin/python3
      import re
      f = open("/path/to/file", "r").read()
      for i in re.findall(r'https://www\.twitch\.tv/videos/(.*?)",', f):
          print(i)  # re.findall returns every match, so multiple matches per line are handled too
  12. Ah, I see what's happening. I was testing this on a single random twitch.tv video URL--in your text it's replacing the URL with just what comes after the /videos/ part. Sed is really meant for manipulating text--for just matching substrings, you'll want (g)awk. gawk 'match($0, /https:\/\/www\.twitch\.tv\/videos\/([^\"]*)\",/, arr) {print arr[1]}' output Regular expressions in (g)awk are surrounded by / characters, so there's a lot of ugly escaping. Quotation marks are also escaped because they normally represent literal string delimiters.
  13. Use the -E flag to use extended regexp syntax, and get rid of the backslashes around your parentheses. Then use the special \1 escape in the replacement string to print out what the capturing parentheses found. (\1-\9 refer to matched sub-strings; with capturing parentheses they refer to what the parentheses matched.) sed -E 's_https://www.twitch.tv/videos/(.*)"_\1_'
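The same substitution-with-backreference idea, sketched with Python's re module for comparison (the video ID here is made up):

```python
import re

line = 'https://www.twitch.tv/videos/123456789"'
# r'\1' in the replacement refers back to what the capturing parentheses matched,
# just like \1 in the sed replacement string
result = re.sub(r'https://www\.twitch\.tv/videos/(.*)"', r'\1', line)
print(result)  # → 123456789
```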
  14. After some quick testing with awk: awk 'match($0, /\*([0-9]+)~/, res) { print res[1] }' file1 file2 ... Where: match($0, /\*([0-9]+)~/, res) matches the regular expression \*([0-9]+)~ (regular expressions are enclosed by // in awk; the leading \* matches a literal asterisk) against $0 (the current input line) and saves the result to res. (Note that the third, array argument to match() is a gawk extension.) { print res[1] } prints item 1 of the (0-indexed) result array. Since there were capturing parentheses in the regular expression, this is what was inside the parentheses. (The 0th item in the array for this bit of code is the full match, as if there weren't capturing parentheses there.)
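The res array works like match groups in most languages; a quick Python parallel (the sample line is invented):

```python
import re

line = "reading *42~ from the log"
m = re.search(r"\*([0-9]+)~", line)
print(m.group(0))  # full match, like res[0] → *42~
print(m.group(1))  # first capturing group, like res[1] → 42
```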
  15. Okay. That still doesn't quite answer the other question, which is what your needs are in terms of precision and recall. E.g., if you need to get every contract related to IT with no false positives, then topic modeling may not be the best tool (for that, you'd need metadata on the documents--probably hand-annotated). But if you just need to get a pretty good number of topics related to IT and some false positives are okay, then it might be a good option. Topic modeling does not require word lists. TM algorithms are designed to learn what the topics are, specifically to avoid the need for hand-built word lists.
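For concreteness, here's precision and recall as used above, as a toy calculation (all the numbers are invented):

```python
# Suppose a topic model flags 50 contracts as IT-related:
# 40 really are IT (true positives), 10 are not (false positives),
# and 20 actual IT contracts were missed entirely (false negatives).
tp, fp, fn = 40, 10, 20

precision = tp / (tp + fp)  # fraction of flagged contracts that are really IT
recall = tp / (tp + fn)     # fraction of all IT contracts that got flagged

print(precision)  # → 0.8
print(recall)     # 40/60, i.e. about 0.667
```

"No false positives" means demanding precision of 1.0; "get every contract" means demanding recall of 1.0. Topic modeling rarely delivers either extreme, which is why the trade-off matters here.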