
Anyone good with PyTorch and stuff? Will pay u a coffee

Nicnac

If someone can get this colab file to run again I would be very grateful!

https://colab.research.google.com/github/jantic/DeOldify/blob/master/DeOldify_colab.ipynb

 

I am assuming it just needs another PyTorch version specified, but I am an absolute noob.

Only asking here b/c I don't know where else I can ask...

 

Folding stats

Vigilo Confido

 


http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl

Replace the URL on the sixth line with this one.

Used the second answer from this page to correct the versioning issue.
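For anyone following along, that line just templates the wheel URL. A minimal sketch of how it expands, with placeholder values (in the actual notebook cell, `accelerator` is picked based on whether a GPU is present and `platform` comes from the `wheel` package's tags):

```python
# Placeholder values: in the Colab notebook these are computed at runtime
# (accelerator from the presence of /opt/bin/nvidia-smi, platform from
# wheel.pep425tags). Shown here only to illustrate how the URL is built.
accelerator = "cu100"      # e.g. the CUDA 10.0 build of the wheel
platform = "cp36-cp36m"    # e.g. Colab's Python 3.6 ABI tag
url = ("http://download.pytorch.org/whl/{accelerator}/"
       "torch-1.0.0-{platform}-linux_x86_64.whl").format(
           accelerator=accelerator, platform=platform)
# The notebook then runs: !pip install -q {url} torchvision
```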


1 hour ago, imbrock said:

http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl

Replace the URL on the sixth line with this one.

Used the second answer from this page to correct the versioning issue.

Thank you so much! Unfortunately, the VM now runs out of RAM when downloading the pre-trained weights :/ This didn't happen before. Anything I can do about it?


#The higher the render_factor, the more GPU memory will be used and generally images will look better.  
#11GB can take a factor of 42 max.  Performance generally gracefully degrades with lower factors, 
#though you may also find that certain images will actually render better at lower numbers.  
#This tends to be the case with the oldest photos.
render_factor=42

What sort of video card do you have / how much VRAM does it have? You just need to drop the 42 down to something that fits your GPU.

EDIT: Saw the 1050 in your New Rig post. Maybe try 12 passes or something.
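As a very rough illustration of that scaling (this linear rule is only a guess extrapolated from the "11GB can take a factor of 42" comment above, not anything from DeOldify itself; tune downward whenever you hit out-of-memory errors):

```python
# Heuristic only: the notebook comment says ~11 GB of VRAM handles
# render_factor=42, so scale linearly with available VRAM and round down.
def suggest_render_factor(vram_gb, max_factor=42, max_vram_gb=11):
    return max(1, min(max_factor, int(max_factor * vram_gb / max_vram_gb)))

suggest_render_factor(4)   # a 4 GB card such as a GTX 1050 Ti -> 15
```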


4 minutes ago, imbrock said:

#The higher the render_factor, the more GPU memory will be used and generally images will look better.  
#11GB can take a factor of 42 max.  Performance generally gracefully degrades with lower factors, 
#though you may also find that certain images will actually render better at lower numbers.  
#This tends to be the case with the oldest photos.
render_factor=42

What sort of video card do you have / how much VRAM does it have? You just need to drop the 42 down to something that fits your GPU.

EDIT: Saw the 1050 in your New Rig post. Maybe try 12 passes or something.

No, I mean I am running it in Colab on the Google server, which has a Tesla P4 and 12 GB of RAM... but now the session crashes and says it runs out of RAM, although the weights haven't changed and it worked fine previously...


Ahh yes, the Colab server; I'm derped this morning.

I just updated

accelerator = 'cu80'

to

accelerator = 'cu90'

Trying it now to see if it fixes it. If it doesn't I'm going to try dropping the render factor just a little bit. The new versions of things could be eating up more memory than before.


1 minute ago, imbrock said:

Ahh yes, the Colab server; I'm derped this morning.

I just updated


accelerator = 'cu80'

to

accelerator = 'cu90'

Trying it now to see if it fixes it. If it doesn't I'm going to try dropping the render factor just a little bit. The new versions of things could be eating up more memory than before.

thanks so much for trying this out with me :3


1 minute ago, Nicnac said:

thanks so much for trying this out with me :3

You're welcome, it's fun. I haven't coded in a while and I've been trying to ease my way back in this year.


38 minutes ago, Nicnac said:

thanks so much for trying this out with me :3

Looks like it crashes out trying to do a colourize with version 90 of CUDA; I think they changed a call name or something. I'm testing it back at cu80 with fewer passes first, then I'm going to look into bumping up the CU version.


3 minutes ago, imbrock said:

Looks like it crashes out trying to do a colourize with version 90 of CUDA; I think they changed a call name or something. I'm testing it back at cu80 with fewer passes first, then I'm going to look into bumping up the CU version.

I am testing a factor of 40 right now; which one are you using?


17 minutes ago, Nicnac said:

I am testing a factor of 40 right now; which one are you using?

I put it through with 32 and it went through fine, but had the same system call error. Which means it's fine at factor 42, and apparently there's something weird with cu80/cu90 and newer GPUs. I'm trying cu100 now, which should apparently work better; fingers crossed.


4 minutes ago, imbrock said:

I put it through with 32 and it went through fine, but had the same system call error. Which means it's fine at factor 42, and apparently there's something weird with cu80/cu90 and newer GPUs. I'm trying cu100 now, which should apparently work better; fingers crossed.

Hm, I just tried 30 and it ran out of memory again :/


1 minute ago, Nicnac said:

Hm, I just tried 30 and it ran out of memory again :/

That's for your own photos coming through Google Drive, right? What resolution and DPI are your photos? If they're quite large I could see that affecting things. I've got one 2000x3000(ish) 96 DPI photo going through at 42 now and it seems alright, though I'm still waiting on it to finish the current process.


2 minutes ago, imbrock said:

That's for your own photos coming through Google Drive, right? What resolution and DPI are your photos? If they're quite large I could see that affecting things. I've got one 2000x3000(ish) 96 DPI photo going through at 42 now and it seems alright, though I'm still waiting on it to finish the current process.

No, for the factor I set at this step:

weights_path = 'pretrained_weights.h5'
results_dir='/content/drive/My Drive/deOldifyImages/results'

#The higher the render_factor, the more GPU memory will be used and generally images will look better.  
#11GB can take a factor of 42 max.  Performance generally gracefully degrades with lower factors, 
#though you may also find that certain images will actually render better at lower numbers.  
#This tends to be the case with the oldest photos.
render_factor=20
filters = [Colorizer34(gpu=0, weights_path=weights_path)]
vis = ModelImageVisualizer(filters, render_factor=render_factor, results_dir=results_dir)

I haven't even tried the example pictures, because it keeps crashing after that step and running out of memory.

The pictures I want to use are all of very different resolutions, but mostly very small.


1 minute ago, Nicnac said:

No, for the factor I set at this step:


weights_path = 'pretrained_weights.h5'
results_dir='/content/drive/My Drive/deOldifyImages/results'

#The higher the render_factor, the more GPU memory will be used and generally images will look better.  
#11GB can take a factor of 42 max.  Performance generally gracefully degrades with lower factors, 
#though you may also find that certain images will actually render better at lower numbers.  
#This tends to be the case with the oldest photos.
render_factor=20
filters = [Colorizer34(gpu=0, weights_path=weights_path)]
vis = ModelImageVisualizer(filters, render_factor=render_factor, results_dir=results_dir)

I haven't even tried the example pictures, because it keeps crashing after that step and running out of memory.

The pictures I want to use are all of very different resolutions, but mostly very small.

That's weird; this is my top cell with the modifications in it:

from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())

accelerator = 'cu100' if path.exists('/opt/bin/nvidia-smi') else 'cpu'

!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision
import torch
print(torch.__version__)
print(torch.cuda.is_available())

when it runs it returns

1.0.0
True

everything else just ran through clean for me for the first time.
 


4 minutes ago, imbrock said:

That's weird; this is my top cell with the modifications in it:


from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())

accelerator = 'cu100' if path.exists('/opt/bin/nvidia-smi') else 'cpu'

!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision
import torch
print(torch.__version__)
print(torch.cuda.is_available())

when it runs it returns


1.0.0
True

everything else just ran through clean for me for the first time.
 

Oh, you're talking about the code further up... imma try cu100


59 minutes ago, imbrock said:

Thanks so much!

However, when I try to do my own pictures it gives me this:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-22-f789f3d10b81> in <module>()
      2   img_path = str("/content/drive/My Drive/deOldifyImages/") + img
      3   if os.path.isfile(img_path):
----> 4     vis.plot_transformed_image(img_path)

/content/DeOldify/fasterai/visualize.py in plot_transformed_image(self, path, figsize, render_factor)
     28     def plot_transformed_image(self, path:str, figsize:(int,int)=(20,20), render_factor:int=None)->ndarray:
     29         path = Path(path)
---> 30         result = self._get_transformed_image_ndarray(path, render_factor)
     31         orig = open_image(str(path))
     32         fig,axes = plt.subplots(1, 2, figsize=figsize)

/content/DeOldify/fasterai/visualize.py in _get_transformed_image_ndarray(self, path, render_factor)
     52 
     53         for filt in self.filters:
---> 54             filtered_image = filt.filter(orig_image, filtered_image, render_factor=render_factor)
     55 
     56         return filtered_image

/content/DeOldify/fasterai/filters.py in filter(self, orig_image, filtered_image, render_factor)
     93     def filter(self, orig_image:ndarray, filtered_image:ndarray, render_factor:int=36)->ndarray:
     94         render_sz = render_factor * self.render_base
---> 95         model_image = self._model_process(self.model, orig=filtered_image, sz=render_sz, gpu=self.gpu)
     96         if self.map_to_orig:
     97             return self._post_process(model_image, orig_image)

/content/DeOldify/fasterai/filters.py in _model_process(self, model, orig, sz, gpu)
     63         orig = VV_(orig[None])
     64         orig = orig.to(device=gpu)
---> 65         result = model(orig)
     66         result = result.detach().cpu().numpy()
     67         result = self._denorm(result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

/content/DeOldify/fasterai/generators.py in forward(self, x)
    116     def forward(self, x:torch.Tensor):
    117         x, enc0, enc1, enc2, enc3 = self._encode(x)
--> 118         x = self._decode(x, enc0, enc1, enc2, enc3)
    119         return x
    120 

/content/DeOldify/fasterai/generators.py in _decode(self, x, enc0, enc1, enc2, enc3)
    102         x = self.up2(x, enc2)
    103         enc1, padh, padw  = self._pad(enc1, x, padh, padw)
--> 104         x = self.up3(x, enc1)
    105         enc0, padh, padw  = self._pad(enc0, x, padh, padw)
    106         x = self.up4(x, enc0)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

/content/DeOldify/fasterai/modules.py in forward(self, up_p, x_p)
     92         up_p = self.tr_conv(up_p)
     93         x_p = self.x_conv(x_p)
---> 94         x = torch.cat([up_p,x_p], dim=1)
     95         x = self.relu(x)
     96         return self.out(x)

RuntimeError: CUDA out of memory. Tried to allocate 110.25 MiB (GPU 0; 14.73 GiB total capacity; 13.80 GiB already allocated; 33.94 MiB free; 51.32 MiB cached)

:( it runs out of memory after the first sample picture


16 minutes ago, Nicnac said:

:( it runs out of memory after the first sample picture

That's really strange; I wonder why it's running out on yours but not on mine, they should be more or less the same...

I assume your Runtime > Change runtime type is set to GPU.

Are you running the script fresh after it fails, or closing out the tab and reopening it to try again?

I noticed that during the first run it updates

# Work around with Pillow being preinstalled on these Colab VMs, causing conflicts otherwise.
!pip install Pillow==4.1.1

to version 4.1.1 from 4.0.0; could be related to that.

Other than that I'm not sure. I really didn't like how my photos came out at 42 anyway, and am running it at 30 like the Dorothy photo.

It really shouldn't be running out of memory anyway; maybe try with just one for-sure-small picture to see how it goes first.


4 minutes ago, imbrock said:

--

I have reset the runtime and am installing it all fresh now. Maybe that will solve it? Downloading the weights always takes soo long...

Do those always have to be loaded into RAM?

Also, please explain what that cu100 setting did. Is it just the driver version for the GPU?

Oh, and please DM me your PayPal so I can send you your coffee!


7 minutes ago, Nicnac said:

I have reset the runtime and am installing it all fresh now. Maybe that will solve it? Downloading the weights always takes soo long...

Do those always have to be loaded into RAM?

Also, please explain what that cu100 setting did. Is it just the driver version for the GPU?

Oh, and please DM me your PayPal so I can send you your coffee!

Also, it worked now!! I was able to process ~200 photos at once!

Quite a few turned out very wonky but there are usable results!

Thanks so much for your help and all the time!


5 minutes ago, Nicnac said:

I have reset the runtime and am installing it all fresh now. Maybe that will solve it? Downloading the weights always takes soo long...

Do those always have to be loaded into RAM?

Also, please explain what that cu100 setting did. Is it just the driver version for the GPU?

Oh, and please DM me your PayPal so I can send you your coffee!

Yeah, it's a heck of a wait to see if the changes fixed anything.

I think it may be combining the loading times and the processing times, though there is a chance the GitHub user-content media bits are accessed a bit slower than most things we're used to.

Yeah, the cu100 just bumped the CUDA version up to the newer one with support for newer GPUs; apparently the older ones don't support Tensor Cores or something.

So glad we got that working for you. It's been a fun project. Cool, I'll shoot you the link :)


  • 2 weeks later...

Hi to all.

I'm having a problem loading multiple images one by one (from one folder); I don't know which code to write in JupyterLab.

My graphics card is a 1050 Ti 4GB.

With this code I am able to colorize a photo, but I want to speed things up because I have a lot of grayscale photos to color...

Quote

 

#NOTE:  Max is 45 with 11GB video cards. 35 is a good default
render_factor=19
#NOTE:  Make source_url None to just read from file at ./video/source/[file_name] directly without modification
source_url=None
source_path ='C:/KOLOR/test_images/proba.jpg'
result_path ='C:/KOLOR/test_images/result_images/proba.jpg'

if source_url is not None:
    result_path = colorizer.plot_transformed_image_from_url(url=source_url, path=source_path, render_factor=render_factor, compare=True)
else:
    result_path = colorizer.plot_transformed_image(path=source_path, render_factor=render_factor, compare=True)
show_image_in_notebook(result_path)

 

 

Tried with:

Quote

for img in os.listdir("C:/KOLOR/test_images/"):
    img_path = "C:/KOLOR/test_images/" + str(img)
    if os.path.isfile(img_path):
        vis.plot_transformed_image(img_path)

No success.

 

Please help... 


This code does not throw out anything:
Quote

for img in os.listdir("C:/KOLOR/test_images"):
    img_path = "C:/KOLOR/test_images" + str(img)
if os.path.isfile(img_path):
    vis.plot_transformed_image(img_path)


I'm a beginner in Python and I don't know the syntax well, so if anyone wants to help me, I would be very grateful.
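For what it's worth, two things in that snippet as pasted would bite: the `if` block sits outside the `for` loop, so only the last file is ever checked, and `"C:/KOLOR/test_images" + str(img)` is missing the path separator, so `os.path.isfile` never matches. A sketch of the usual fix (the commented-out `vis.plot_transformed_image` call is the one from earlier in the thread):

```python
import os

def collect_images(folder):
    """Return the full paths of all regular files in `folder`, sorted."""
    paths = []
    for name in sorted(os.listdir(folder)):
        # os.path.join inserts the separator; bare concatenation without a
        # trailing slash builds paths like "C:/KOLOR/test_imagesproba.jpg".
        path = os.path.join(folder, name)
        if os.path.isfile(path):   # keep the check inside the loop body
            paths.append(path)
    return paths

# for img_path in collect_images("C:/KOLOR/test_images"):
#     vis.plot_transformed_image(img_path)
```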
