
Symbolls

Member
  • Posts

    84
  • Joined

  • Last visited

Everything posted by Symbolls

  1. So I have wanted to do a project for a while now, but I don't know where to start, since I have never trained a neural network before and this task is quite complex. If you could point me in the right direction for resources like tutorials or examples, that is all I am asking, since I can't find anything about this topic on Google. I have programmed before, but mostly in robotics, nothing fancy. Now I want to create an "AI" image generator similar to the cartoonish or animefy filters you find in apps or on the internet, where you upload a photo or video and it generates a counterpart image that is similar, just goofy or cartoonified. I am looking for ways to take motion data and image data and use them to build a video-to-cartoon-style AI video generator. I know some of you may think such an immense task is off limits for a programming novice like myself, but I assure you I understand the theoretical workings of all the subjects needed, and I am eager to learn, since creating this is a passion of mine. The schematic for something like this would look like this:

     Motion kinematic data ----\
                                +--> [ Neural network framework ] --> newly generated cartoonified frame --> video or image
     Rendered 3D figure -------/

     I am not looking for straight code (though if you have some I would be more than happy to see it), just guidance on how to start coding this.
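     A classic non-neural starting point for this kind of cartoonify filter is color quantization: collapsing smooth color gradients into a few flat bands is the "flat shading" half of most cartoon effects (the other half is edge darkening). A minimal numpy-only sketch, where `quantize_colors` and the synthetic test image are my own illustration, not code from any particular library:

```python
import numpy as np

def quantize_colors(img, levels=4):
    """Reduce each channel to a few flat bands -- the 'flat shading'
    half of a cartoon effect (img: HxWx3 uint8 array)."""
    step = 256 // levels
    return ((img // step) * step + step // 2).astype(np.uint8)

# Tiny synthetic "photo": a smooth horizontal gradient, 64x256, 3 channels.
img = np.dstack([np.tile(np.arange(256, dtype=np.uint8), (64, 1))] * 3)
toon = quantize_colors(img, levels=4)

# The 256-value gradient collapses to 4 flat bands per channel.
print(len(np.unique(toon)))  # -> 4
```

     A neural version of the same idea (image-to-image translation, e.g. pix2pix or CycleGAN-style models) learns the mapping from photo to cartoon instead of hard-coding it, which is the direction the tutorials on GANs cover.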
  2. Now that I think about it, it can't work, because I don't have a third hard drive to install Windows on so I can recover the other one. Is there any other way I can get the data out of there?
  3. I have an even better idea, if I may. I believe I have some important stuff on my desktop, so assuming there is no encryption, can I plug each hard drive into another Windows machine and extract the important bits? Will it be plug and play? Also, the article shows Windows 10 and I am on 11; will it still work?
  4. I have an SSD and a hard drive; the important stuff is mostly on the hard drive. If I reinstall Windows with the hard drive physically disconnected, and after installation/setup I plug it back into the SATA port, can I access my files? I don't know of any encryption that Windows would have done. I have never used and never plan to use McAfee or other such services; I ran bone-stock Windows.
  5. I created my Microsoft account years ago. Recently I changed something in the BIOS because Valorant was not recognizing my TPM, and when I went back to Windows my PIN was no longer accepted, for some ridiculous reason. The recovery phone number on my account is wrong (one digit off), and when I tried to change it, the screen said it would take 30 days. 30 DAYS to change my phone number! And because the phone number is wrong, I cannot log in to my computer for 30 days, just because Microsoft decided so. I am at a loss, since Microsoft does not have real humans on support, and whatever I tried, I never got hold of an actual HUMAN. I am so mad and desperate right now: I need my computer for uni and now I can't access it for 30 days, right in the exam period. I am going mad.
  6. Hello, I need to do some assignments at uni. I forgot what the teacher taught us, and I need help please. First, I need to rewrite this code without using for loops. I sort of know how for loops work, but I don't know about do-while loops; I thought they were straightforward, but not so much. Here is the code that works with the for loops:

     import java.util.Scanner;

     public class Exercise {
         public static void main(String[] args) {
             //TODO: put your code here
             Scanner scanner = new Scanner(System.in);
             int number = scanner.nextInt();
             if (number < 0) {
                 System.out.println();
             } else {
                 for (int i = 1; i <= number; i++) {
                     System.out.print(i);
                     System.out.print(' ');
                 }
                 System.out.println();
                 for (int i = -number; i <= number; i++) {
                     System.out.print(i);
                     System.out.print(' ');
                 }
             }
         }
     }
  7. I don't know what to do. I need to solve this exercise for university, the teacher did not tell us how to do it, and I need help please.

     import java.util.Scanner;

     public class Exercise {
         public static void main(String[] args) {
             Scanner input = new Scanner(System.in);
             boolean isCold = input.nextBoolean(); // Is it cold? [true/false]
             boolean isDry = input.nextBoolean();  // Is it dry? [true/false]
             boolean isHard = isDry ^ isCold;
             if (isCold == isDry) {
                 System.out.print("It is dry\n" + "It is hot or dry");
             } else {
                 if (!isCold) {
                     System.out.print("It is hot\n" + "It is dry\n" + "It is hot or dry\n" + "It is hot and dry");
                 }
                 if (!isHard) {
                     System.out.print("It is hot\n" + "It is hot or dry");
                 }
             }
             /*
             if (isDry ^ isCold) {
                 System.out.print("It is hot\n" + "It is dry\n" + "It is hot or dry\n" + "It is hot and dry");
             }
             */
         }
     }
  8. Then what can I do so that the sensor-reading part works?
  9. So I made a line follower, and the code was all fine and good until I added an IR receiver so I can turn it on and off at will. For some reason, the code that actually follows the line does not want to loop or react after the state is ON. I am not a good programmer, I am an engineer, so it might be something dumb; sorry if that's the case. Here is the code:

     #include <Arduino.h>

     #if FLASHEND <= 0x1FFF  // For 8k flash or less, like ATtiny85. Exclude exotic protocols.
     #define EXCLUDE_UNIVERSAL_PROTOCOLS // Saves up to 1000 bytes program space.
     #define EXCLUDE_EXOTIC_PROTOCOLS
     #endif

     /*
      * Define macros for input and output pin etc.
      */
     //#include "PinDefinitionsAndMore.h"
     #include <IRremote.hpp>

     #if defined(APPLICATION_PIN)
     #define RELAY_PIN APPLICATION_PIN
     #else
     #define RELAY_PIN 5
     #endif

     // Arduino Line Follower Robot Code
     #include <Servo.h>

     Servo myservo1;
     Servo myservo2;

     #define R_S 2 // ir sensor Right
     #define L_S 4 // ir sensor Left

     void setup() {
         pinMode(LED_BUILTIN, OUTPUT);
         pinMode(RELAY_PIN, OUTPUT);
         Serial.begin(115200);
     #if defined(__AVR_ATmega32U4__) || defined(SERIAL_PORT_USBVIRTUAL) || defined(SERIAL_USB) || defined(SERIALUSB_PID) || defined(ARDUINO_attiny3217)
         delay(4000); // To be able to connect Serial monitor after reset or power up and before first print out. Do not wait for an attached Serial Monitor!
     #endif
         // Just to know which program is running on my Arduino
         Serial.println(F("START " __FILE__ " from " __DATE__ "\r\nUsing library version " VERSION_IRREMOTE));

         // Start the receiver and if not 3. parameter specified, take LED_BUILTIN pin from the internal boards definition as default feedback LED
         IrReceiver.begin(10, ENABLE_LED_FEEDBACK);
         Serial.print(F("Ready to receive IR signals at pin "));
         Serial.println(10);

         int on = 0;
         unsigned long last = millis();

         myservo1.attach(9);
         myservo2.attach(8);
         pinMode(R_S, INPUT);
         pinMode(L_S, INPUT);
         delay(10);
         myservo2.write(90);
         delay(1000);
     }

     int on = 0;
     unsigned long last = millis();

     void loop() {
         if (IrReceiver.decode()) {
             // If it's been at least 1/4 second since the last
             // IR received, toggle the relay
             if (millis() - last > 250) {
                 on = !on;
                 Serial.print(F("Switch relay "));
                 if (on) {
                     if ((digitalRead(R_S) == 1) && (digitalRead(L_S) == 1)) {
                         Serial.print("Forword");
                         myservo1.write(94);
                         myservo2.write(100);
                     } // if Right Sensor and Left Sensor are at White color then it will call forword function
                     if ((digitalRead(R_S) == 0) && (digitalRead(L_S) == 1)) {
                         Serial.print("Right");
                         myservo1.write(113);
                         myservo2.write(95);
                     } // if Right Sensor is Black and Left Sensor is White then it will call turn Right function
                     if ((digitalRead(R_S) == 1) && (digitalRead(L_S) == 0)) {
                         Serial.print("Left");
                         myservo1.write(75);
                         myservo2.write(95);
                     } // if Right Sensor is White and Left Sensor is Black then it will call turn Left function
                     if ((digitalRead(R_S) == 0) && (digitalRead(L_S) == 0)) {
                         Serial.print("Stop");
                         myservo1.write(94);
                         myservo2.write(90);
                     } // if Right Sensor and Left Sensor are at Black color then it will call Stop function
                 }
                 digitalWrite(RELAY_PIN, HIGH);
                 Serial.println(F("on"));
             } else {
                 myservo1.write(94);
                 myservo2.write(90);
                 digitalWrite(RELAY_PIN, LOW);
                 Serial.println(F("off"));
             }
     #if FLASHEND >= 0x3FFF // For 16k flash or more, like ATtiny1604
             IrReceiver.printIRResultShort(&Serial);
             Serial.println();
             if (IrReceiver.decodedIRData.protocol == UNKNOWN) {
                 // We have an unknown protocol, print more info
                 IrReceiver.printIRResultRawFormatted(&Serial, true);
             }
     #else
             // Print a minimal summary of received data
             IrReceiver.printIRResultMinimal(&Serial);
             Serial.println();
     #endif // FLASHEND
             last = millis();
             IrReceiver.resume(); // Enable receiving of the next value
         }
     }
  10. Hello, I am working on a project and I am not good at programming. I want the value of a variable to increase when I press the button on my PS4 controller, which will add 10 degrees to the servo motor angle. Thanks in advance, and sorry if my code is a mess, I am a noob when it comes to programming.

      from adafruit_servokit import ServoKit
      import time
      from pyPS4Controller.controller import Controller

      kit = ServoKit(channels=16)
      kit.servo[0].actuation_range = 180

      class MyController(Controller):
          angle_count = 0

          def __init__(self, angle_count):
              self.angle_count = angle_count
              angle_count.person_count += 1  # here

          def __init__(self, **kwargs):
              Controller.__init__(self, **kwargs)

          def on_x_press(self, angle_count):
              print("1")
              angle_count + 1
              print(angle_count)
              #Angle.angle_count += 1
              #print(Angle.angle_count)
              #kit.servo[0].angle = 180

          # def on_x_release(self):

          def on_circle_press(self):
              print("2")
              #print(Angle.angle_count)
              #kit.servo[0].angle = 0

      controller = MyController(interface="/dev/input/js0", connecting_using_ds4drv=False)
      controller.listen(timeout=60)

      Error:

      Waiting for interface: /dev/input/js0 to become available . . .
      Successfully bound to: /dev/input/js0.
      Traceback (most recent call last):
        File "/home/symbolls/Desktop/Code Project RAmk2/Ver1.py", line 37, in <module>
          controller.listen(timeout=60)
        File "/home/symbolls/Desktop/Code Project RAmk2/lib/python3.6/site-packages/pyPS4Controller/controller.py", line 263, in listen
          debug=self.debug)
        File "/home/symbolls/Desktop/Code Project RAmk2/lib/python3.6/site-packages/pyPS4Controller/controller.py", line 319, in __handle_event
          self.on_x_press()
      TypeError: on_x_press() missing 1 required positional argument: 'angle_count'
      Exiting...
      Cleaning up pins
      Process finished with exit code 1
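      The traceback shows that pyPS4Controller invokes `self.on_x_press()` with no extra positional arguments, so the counter cannot arrive as a parameter; it has to live on the instance. A minimal sketch of that state-keeping pattern, with the controller and servo libraries left out so it can run anywhere (`AngleStepper` and its names are my own, not part of any library):

```python
class AngleStepper:
    """Keeps a servo angle as instance state; each 'press' adds a step,
    clamped to the servo's 0-180 degree range."""

    def __init__(self, step=10, max_angle=180):
        self.step = step
        self.max_angle = max_angle
        self.angle = 0

    def on_x_press(self):  # callback takes only self, matching the traceback
        self.angle = min(self.angle + self.step, self.max_angle)
        # In the real project this is where the servo would move:
        # kit.servo[0].angle = self.angle

stepper = AngleStepper(step=10)
for _ in range(20):        # simulate 20 button presses
    stepper.on_x_press()
print(stepper.angle)       # -> 180 (clamped, not 200)
```

      In the real script, the same `self.angle` bookkeeping would go inside `MyController.on_x_press(self)`, with a single `__init__` that calls `Controller.__init__(self, **kwargs)` and then sets `self.angle = 0`.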
  11. I don't know, it's not my original code. I was following a tutorial, and the guy in the video got it to work, but after I copied the code it wasn't working for me.
  12. So the code has this error:

      Traceback (most recent call last):
        File "D:\Opencv tests\HandVolume.py", line 31, in <module>
          img = detector.findHands(0,img)
        File "D:\Opencv tests\HandTrackingModule.py", line 21, in findHands
          self.results = self.hands.process(imgRGB)
      AttributeError: 'int' object has no attribute 'hands'
      [ WARN:0] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

      I don't know; I have tried playing around with the code to fix it, but it doesn't work, so I was hoping someone could help me. I don't know what to write to fix it. The code is on the latest version of Python and I am using PyCharm, and PyCharm doesn't know what's wrong either. The code has two parts: the main part, and a secondary part that is a module for the first. The files are attached below, but I will leave the code visible. Thanks in advance for your time.

      Main part (HandVolume.py):

      import cv2
      import time
      import numpy as np
      import HandTrackingModule as htm
      import math
      from ctypes import cast, POINTER
      from comtypes import CLSCTX_ALL
      from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

      ################################
      wCam, hCam = 640, 480
      ################################

      cap = cv2.VideoCapture(0)
      cap.set(3, wCam)
      cap.set(4, hCam)
      pTime = 0

      detector = htm.handDetector#(detectionCon = 0.7)

      devices = AudioUtilities.GetSpeakers()
      interface = devices.Activate(
          IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
      volume = cast(interface, POINTER(IAudioEndpointVolume))
      # volume.GetMute()
      # volume.GetMasterVolumeLevel()
      volRange = volume.GetVolumeRange()
      minVol = volRange[0]
      maxVol = volRange[1]
      vol = 0
      volBar = 400
      volPer = 0

      while True:
          success, img = cap.read()
          img = detector.findHands(0,img)
          lmList = detector.findPosition(img, draw=False)
          if len(lmList) != 0:
              # print(lmList[4], lmList[8])
              x1, y1 = lmList[4][1], lmList[4][2]
              x2, y2 = lmList[8][1], lmList[8][2]
              cx, cy = (x1 + x2) // 2, (y1 + y2) // 2

              cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
              cv2.circle(img, (x2, y2), 15, (255, 0, 255), cv2.FILLED)
              cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 3)
              cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)

              length = math.hypot(x2 - x1, y2 - y1)
              print(length)

              # Hand range 50 - 300
              # Volume Range -65 - 0
              vol = np.interp(length, [50, 300], [minVol, maxVol])
              volBar = np.interp(length, [50, 300], [400, 150])
              volPer = np.interp(length, [50, 300], [0, 100])
              print(int(length), vol)
              volume.SetMasterVolumeLevel(vol, None)

              if length < 50:
                  cv2.circle(img, (cx, cy), 15, (0, 255, 0), cv2.FILLED)

          cv2.rectangle(img, (50, 150), (85, 400), (255, 0, 0), 3)
          cv2.rectangle(img, (50, int(volBar)), (85, 400), (255, 0, 0), cv2.FILLED)
          cv2.putText(img, f'{int(volPer)} %', (40, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 3)

          cTime = time.time()
          fps = 1 / (cTime - pTime)
          pTime = cTime
          cv2.putText(img, f'FPS: {int(fps)}', (40, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 3)

          cv2.imshow("Img", img)
          cv2.waitKey(1)

      Module (HandTrackingModule.py):

      import cv2
      import mediapipe as mp
      import time
      import math

      class handDetector():
          def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
              self.mode = mode
              self.maxHands = maxHands
              self.detectionCon = detectionCon
              self.trackCon = trackCon

              self.mpHands = mp.solutions.hands
              self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.detectionCon, self.trackCon)
              self.mpDraw = mp.solutions.drawing_utils
              self.tipIds = [4, 8, 12, 16, 20]

          def findHands(self, img, draw=True):
              imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
              self.results = self.hands.process(imgRGB)
              print(self.results)
              if self.results.multi_hand_landmarks:
                  for handLms in self.results.multi_hand_landmarks:
                      if draw:
                          self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS)
              return img

          def findPosition(self, img, handNo=0, draw=True):
              xList = []
              yList = []
              bbox = []
              self.lmList = []
              if self.results.multi_hand_landmarks:
                  myHand = self.results.multi_hand_landmarks[handNo]
                  for id, lm in enumerate(myHand.landmark):
                      # print(id, lm)
                      h, w, c = img.shape
                      cx, cy = int(lm.x * w), int(lm.y * h)
                      xList.append(cx)
                      yList.append(cy)
                      # print(id, cx, cy)
                      self.lmList.append([id, cx, cy])
                      if draw:
                          cv2.circle(img, (cx, cy), 5, (255, 0, 255), cv2.FILLED)
                  xmin, xmax = min(xList), max(xList)
                  ymin, ymax = min(yList), max(yList)
                  bbox = xmin, ymin, xmax, ymax
                  if draw:
                      cv2.rectangle(img, (bbox(0), -20, bbox(1), -20), (bbox[2] + 20, bbox[3] + 20), (0, 255, 0), 2)
              return self.lmList, bbox

          def fingersUp(self):
              fingers = []
              # Thumb
              if self.lmList[self.tipIds[0]][1] > self.lmList[self.tipIds[0]][1]:
                  fingers.append(1)
              else:
                  fingers.append(0)
              # 4 Fingers
              for id in range(1, 5):
                  if self.lmList[self.tipIds[id]][2] < self.lmList[self.tipIds[id]][2]:
                      fingers.append(1)
                  else:
                      fingers.append(0)
              return fingers

          def findDistance(self, p1, p2, img, draw=True):
              x1, y1 = self.lmList[p1][1], self.lmList[p1][2]
              x2, y2 = self.lmList[p2][1], self.lmList[p2][2]
              cx, cy = (x1 + x2) // 2, (y1 + y2) // 2

              if draw:
                  cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
                  cv2.circle(img, (x2, y2), 15, (255, 0, 255), cv2.FILLED)
                  cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 3)
                  cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
              length = math.hypot(x2 - x1, y2 - y1)

              return length, img, [x1, y1, x2, y2, cx, cy]

      def main():
          pTime = 0
          cap = cv2.VideoCapture(0)
          detector = handDetector()
          while True:
              success, img = cap.read()
              img = detector.findHands(img)
              lmList = detector.findPosition(img)
              if len(lmList) != 0:
                  print(lmList[4])

              cTime = time.time()
              fps = 1 / (cTime - pTime)
              pTime = cTime

              cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)

              cv2.imshow("Image", img)
              cv2.waitKey(1)

      if __name__ == "__main__":
          main()

      HandTrackingModule.py HandVolume.py
  13. It's the same code as posted. I tried several times to paste and re-paste it, but it doesn't want to run.
  14. I am on Windows using PyCharm, and the code is full of errors; something like 60% of the lines are red. I wanted to do this cool trick from the course where you can use OpenCV to change the volume of the computer, but the code from the course site is bugged. I like the idea so much that I want to get it working somehow.
  15. I have been trying some OpenCV code I got from https://www.computervision.zone/lessons/code-files-12/ and when I put it into PyCharm it has a lot of errors and I don't know how to fix them. It consists of two modules: a hand tracking module and a main module.

      import cv2
      import mediapipe as mp
      import time
      import math

      class handDetector():
          def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
              self.mode = mode
              self.maxHands = maxHands
              self.detectionCon = detectionCon
              self.trackCon = trackCon

              self.mpHands = mp.solutions.hands
              self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.detectionCon, self.trackCon)
              self.mpDraw = mp.solutions.drawing_utils
              self.tipIds = [4, 8, 12, 16, 20]

          def findHands(self, img, draw=True):
              imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
              self.results = self.hands.process(imgRGB)
              # print(results.multi_hand_landmarks)
              if self.results.multi_hand_landmarks:
                  for handLms in self.results.multi_hand_landmarks:
                      if draw:
                          self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS)
              return img

          def findPosition(self, img, handNo=0, draw=True):
              xList = []
              yList = []
              bbox = []
              self.lmList = []
              if self.results.multi_hand_landmarks:
                  myHand = self.results.multi_hand_landmarks[handNo]
                  for id, lm in enumerate(myHand.landmark):
                      # print(id, lm)
                      h, w, c = img.shape
                      cx, cy = int(lm.x * w), int(lm.y * h)
                      xList.append(cx)
                      yList.append(cy)
                      # print(id, cx, cy)
                      self.lmList.append([id, cx, cy])
                      if draw:
                          cv2.circle(img, (cx, cy), 5, (255, 0, 255), cv2.FILLED)
                  xmin, xmax = min(xList), max(xList)
                  ymin, ymax = min(yList), max(yList)
                  bbox = xmin, ymin, xmax, ymax
                  if draw:
                      cv2.rectangle(img, (bbox[0] – 20, bbox[1] – 20), (bbox[2] + 20, bbox[3] + 20), (0, 255, 0), 2)
              return self.lmList, bbox

          def fingersUp(self):
              fingers = []
              # Thumb
              if self.lmList[self.tipIds[0]][1] > self.lmList[self.tipIds[0] – 1][1]:
                  fingers.append(1)
              else:
                  fingers.append(0)
              # 4 Fingers
              for id in range(1, 5):
                  if self.lmList[self.tipIds[id]][2] < self.lmList[self.tipIds[id] – 2][2]:
                      fingers.append(1)
                  else:
                      fingers.append(0)
              return fingers

          def findDistance(self, p1, p2, img, draw=True):
              x1, y1 = self.lmList[p1][1], self.lmList[p1][2]
              x2, y2 = self.lmList[p2][1], self.lmList[p2][2]
              cx, cy = (x1 + x2) // 2, (y1 + y2) // 2

              if draw:
                  cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
                  cv2.circle(img, (x2, y2), 15, (255, 0, 255), cv2.FILLED)
                  cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 3)
                  cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
              length = math.hypot(x2 – x1, y2 – y1)

              return length, img, [x1, y1, x2, y2, cx, cy]

      def main():
          pTime = 0
          cap = cv2.VideoCapture(1)
          detector = handDetector()
          while True:
              success, img = cap.read()
              img = detector.findHands(img)
              lmList = detector.findPosition(img)
              if len(lmList) != 0:
                  print(lmList[4])

              cTime = time.time()
              fps = 1 / (cTime – pTime)
              pTime = cTime

              cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)

              cv2.imshow(“Image”, img)
              cv2.waitKey(1)

      if __name__ == “__main__”:
          main()

      import cv2
      import time
      import numpy as np
      import HandTrackingModule as htm
      import math
      from ctypes import cast, POINTER
      from comtypes import CLSCTX_ALL
      from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

      ################################
      wCam, hCam = 640, 480
      ################################

      cap = cv2.VideoCapture(1)
      cap.set(3, wCam)
      cap.set(4, hCam)
      pTime = 0

      detector = htm.handDetector(detectionCon=0.7)

      devices = AudioUtilities.GetSpeakers()
      interface = devices.Activate(
          IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
      volume = cast(interface, POINTER(IAudioEndpointVolume))
      # volume.GetMute()
      # volume.GetMasterVolumeLevel()
      volRange = volume.GetVolumeRange()
      minVol = volRange[0]
      maxVol = volRange[1]
      vol = 0
      volBar = 400
      volPer = 0

      while True:
          success, img = cap.read()
          img = detector.findHands(img)
          lmList = detector.findPosition(img, draw=False)
          if len(lmList) != 0:
              # print(lmList[4], lmList[8])
              x1, y1 = lmList[4][1], lmList[4][2]
              x2, y2 = lmList[8][1], lmList[8][2]
              cx, cy = (x1 + x2) // 2, (y1 + y2) // 2

              cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
              cv2.circle(img, (x2, y2), 15, (255, 0, 255), cv2.FILLED)
              cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 3)
              cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)

              length = math.hypot(x2 - x1, y2 - y1)
              # print(length)

              # Hand range 50 - 300
              # Volume Range -65 - 0
              vol = np.interp(length, [50, 300], [minVol, maxVol])
              volBar = np.interp(length, [50, 300], [400, 150])
              volPer = np.interp(length, [50, 300], [0, 100])
              print(int(length), vol)
              volume.SetMasterVolumeLevel(vol, None)

              if length < 50:
                  cv2.circle(img, (cx, cy), 15, (0, 255, 0), cv2.FILLED)

          cv2.rectangle(img, (50, 150), (85, 400), (255, 0, 0), 3)
          cv2.rectangle(img, (50, int(volBar)), (85, 400), (255, 0, 0), cv2.FILLED)
          cv2.putText(img, f'{int(volPer)} %', (40, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 3)

          cTime = time.time()
          fps = 1 / (cTime - pTime)
          pTime = cTime
          cv2.putText(img, f'FPS: {int(fps)}', (40, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 3)

          cv2.imshow("Img", img)
          cv2.waitKey(1)
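      Setting the camera and audio plumbing aside, the distance-to-volume mapping in HandVolume.py is plain linear interpolation with clamping at both ends, so it can be checked in isolation. A small sketch using the same ranges as the code above (the -65..0 dB range is only typical; the real bounds come from GetVolumeRange()):

```python
import numpy as np

# Same ranges as in HandVolume.py: finger distance 50-300 px,
# master volume roughly -65..0 dB on many devices.
minVol, maxVol = -65.0, 0.0

def distance_to_volume(length):
    # np.interp clamps outside [50, 300], so very small or very large
    # hand spans still map into the valid volume range
    return float(np.interp(length, [50, 300], [minVol, maxVol]))

print(distance_to_volume(50))    # -> -65.0  (fingers together: quiet end)
print(distance_to_volume(300))   # -> 0.0    (fingers spread: full volume)
print(distance_to_volume(175))   # -> -32.5  (midpoint)
print(distance_to_volume(10))    # -> -65.0  (clamped below the range)
```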
  16. Hi, and sorry if this is not the correct category for this, but I don't know where else to put this problem. I want to do a weird extrude in SolidWorks that is not equal, and I don't know how to extrude it. The grey part is the zone, but as you can see it is very unusual. I have the solid file in STEP and PART format if anybody can help. Thanks for your time reading :)) Base.STEP Base.SLDPRT
  17. So I want to send data, like sensor data, over the local network. I have been successful in sending a text once, but I can't send it continuously.

      PC code (master):

      # Echo server program
      import socket
      import time

      #HOST = ''              # Symbolic name meaning all available interfaces
      PORT = 50007            # Arbitrary non-privileged port
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.bind(('', PORT))
          s.listen(1)
          conn, addr = s.accept()
          with conn:
              print('Connected by', addr)
              while True:
                  data = conn.recv(1024)
                  print(data)
                  #time.sleep(1)
                  #conn.sendall(bytes("Hello Slave!!", encoding='utf8'))
                  conn.sendall(b'Hello Slave!!')
                  #if not data: break
                  #conn.sendall(data)

      Raspberry Pi code (slave):

      # Echo client program
      import time
      import socket

      HOST = 'Ip of pc'       # The remote host
      PORT = 50007            # The same port as used by the server
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.connect((HOST, PORT))
          data = s.recv(1024)
          while True:
              s.sendall(b'Hello, world')
              print('Received', repr(data))
              #time.sleep(1)
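      Two things commonly trip up a loop like this: the client calls recv only once, outside its loop, and TCP has no message boundaries, so repeated sendall calls can arrive glued together. A usual fix is to frame each reading (e.g. with a newline) and read one message at a time inside the loop. A minimal self-contained sketch of the framing, with both ends in one process via threads just so it can run anywhere (the port number is arbitrary):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # arbitrary non-privileged port

# Bind and listen first in the main thread so the client can't connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

results = []

def server():
    conn, _ = srv.accept()
    # makefile("r") lets us read exactly one newline-framed message at a time
    with conn, conn.makefile("r") as f:
        for _ in range(3):
            results.append(f.readline().strip())

t = threading.Thread(target=server)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect((HOST, PORT))
    for reading in [21.5, 21.7, 21.6]:      # stand-in sensor values
        c.sendall(f"{reading}\n".encode())  # the newline marks message end

t.join()
srv.close()
print(results)  # -> ['21.5', '21.7', '21.6']
```

      On the real setup, the server loop would run forever instead of three iterations, and the Pi would send one framed reading per sensor sample.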
  18. So I have been playing around with computer vision a lot, and I have found ways to make the Raspberry Pi faster at tracking, but it is still not that fast. I want to process the computer vision part on a computer's GPU over WiFi, but I can't seem to find a way to do it, or I am just googling it wrong. If somebody has a tutorial on how to do it, or something close (most importantly the data transmission and receiving part), please let me know. Thanks for reading.
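  For the transmission and receiving part, the usual pattern is: the Pi compresses each camera frame (e.g. to JPEG with cv2.imencode), sends the byte length first, then the bytes; the PC reads exactly that many bytes back and decodes the frame. A sketch of just the length-prefix framing, with plain bytes standing in for an encoded frame so it runs without a camera (the helper names pack_frame/read_frame are my own):

```python
import io
import struct

def pack_frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(stream) -> bytes:
    """Read one length-prefixed frame back from a socket-like stream."""
    (size,) = struct.unpack(">I", stream.read(4))
    return stream.read(size)

# Simulate the wire with an in-memory buffer; on a real connection the same
# read_frame() works on sock.makefile("rb"), whose read(n) blocks until
# n bytes have arrived.
frame = b"\xff\xd8...jpeg bytes..."     # stand-in for cv2.imencode output
wire = io.BytesIO(pack_frame(frame) + pack_frame(b"next"))
print(read_frame(wire) == frame)        # -> True
print(read_frame(wire))                 # -> b'next'
```

  The length prefix is what keeps consecutive frames from being glued together or split by TCP, which is the main failure mode when streaming images over a socket.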
  19. So I want to make a program or AI that learns my voice and another guy's voice, and then reproduces what I am saying live in his voice, and I don't know where to start. I have programmed in Python before, but I am not that advanced. What I want to do is take two perfectly matched MP3 clips of people reading the same text; then one of the persons speaks normally into the microphone, and the output is the other guy's or girl's voice. I looked at TensorFlow and got overwhelmed by the 7-9 hour tutorials, so if anybody knows of something that already exists that does this, or knows a more concentrated tutorial on AI for beginners, please let me know. Thanks for reading.
  20. Thanks, but it does not work. I will try a workaround where I literally rip the cables off the RGB LED and connect them to the motherboard.
  21. I must be an idiot, I know, but I don't see any tab or anywhere else where it says Windows binaries.