
Script to download from URL

Kalinco

Hi guys,
Up to now, I have been using a small Python script to download files from the internet (mainly Dropbox) when given the URL of the file (see below).
 

from __future__ import print_function
import os
import time
import urllib2
import urlparse


def download_file(url, download, desc=None):
    if internet_on():
        u = urllib2.urlopen(url)

        # Derive a filename from the URL path, falling back to a default
        scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
        filename = os.path.basename(path)
        if not filename:
            filename = 'downloaded.file'
        if desc:
            filename = os.path.join(desc, filename)

        with open(filename, 'wb') as f:
            # Read the Content-Length header; the header API differs
            # between Python 2 (getheaders) and Python 3 (get_all)
            meta = u.info()
            meta_func = meta.getheaders if hasattr(meta, 'getheaders') else meta.get_all
            meta_length = meta_func("Content-Length")
            file_size = None
            if meta_length:
                file_size = int(meta_length[0])
            # write() is a logging helper defined elsewhere in the script
            write("Downloading: " + download, 2)

            # Stream the response in 8 KiB chunks, updating a one-line progress display
            file_size_dl = 0
            block_sz = 8192
            while True:
                buffer = u.read(block_sz)
                if not buffer:
                    break

                file_size_dl += len(buffer)
                f.write(buffer)

                status = "{0:16}".format(file_size_dl)
                if file_size:
                    # 100.0 forces float division under Python 2
                    status += " [{0:6.2f}%]".format(file_size_dl * 100.0 / file_size)
                status += '\r'
                print(status, end="")
            print()
        return filename
    else:
        write("Internet Connection Lost, Please Check your Connection And Try Again", 1)
        time.sleep(5)
        exit()


def internet_on():
    # Quick connectivity check: try to reach a known host with a short timeout
    try:
        urllib2.urlopen('http://www.google.co.uk', timeout=5)
        return True
    except urllib2.URLError:
        return False
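
For illustration, a hypothetical call (the share link, filename, and 'downloads' folder below are made up; for Dropbox, appending ?dl=1 to a share link requests the raw file rather than the preview page):

download_file('https://www.dropbox.com/s/abc123/file.zip?dl=1', 'file.zip', desc='downloads')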

 

However, this method takes a very long time to download files that shouldn't take long at all. Does anyone have a better, faster method of doing the same thing?


So why would you want to use this over just downloading a file normally from Dropbox?



Try typing the URL into one of the many free multistream download managers. I have had good experiences with DAP for almost a decade now.

 

If the problem isn't you, then this might fix it. If the problem is them... well, it depends on how they limit their downloads.



I am using this script to automate these downloads, so typing the URL into a download manager is not what I am looking for.



If you're on Linux, get wget or curl.


No, I'm using Windows.


curl or wget is what you are looking for.

I can't really use third-party programs; it needs to use code that is in the default Windows package, or that can be converted to run within that environment.


There is a Windows equivalent to wget called Invoke-WebRequest.

What program is that in? PowerShell?
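
For reference, Invoke-WebRequest is a PowerShell cmdlet (introduced in PowerShell 3.0), and PowerShell ships with Windows, so it fits the no-third-party constraint. A minimal sketch of driving it from Python via subprocess; the URL and output filename are hypothetical:

import subprocess

# Hypothetical example: ask PowerShell's Invoke-WebRequest to fetch a file.
# The URL and output filename are placeholders for illustration.
subprocess.check_call([
    'powershell', '-Command',
    'Invoke-WebRequest -Uri "https://example.com/file.zip" -OutFile "file.zip"'
])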


You can get Wget for Windows, you know; it's pretty useful for batch scripts and so on.




What these guys said.


Thanks, everyone, for your help.

 

I had tried:

urllib.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")

 

before, but it did not work, which is why I was looking for an alternative solution. I then worked out that the URL I was downloading from had expired, which was why it would not work.
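
As a side note, urlretrieve also takes a reporthook callback, which can reproduce the progress display from the original script. A minimal Python 2 sketch, reusing the placeholder URL from above:

from __future__ import print_function
import urllib

def report(block_num, block_size, total_size):
    # urlretrieve calls this after each block; total_size is -1 when unknown
    if total_size > 0:
        downloaded = block_num * block_size
        percent = min(downloaded * 100.0 / total_size, 100)
        print("\r{0:6.2f}%".format(percent), end="")

urllib.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3", reporthook=report)
print()  # move past the progress line when done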

 

Thanks anyway!

