Installing TensorFlow

Sport Driver

Hello. 

I'm trying to install TensorFlow on Windows 10 and it's just not having it. I've tried many things, but it wouldn't go anywhere.

I first tried this video and this video, but it wouldn't do anything. Does anyone have any good recommendations on what to do?

PC: R7 5800X, AMD RX480 4GB, 32 GB RAM, 1TB 970 EVO, 500 GB 860 EVO, 500 GB HDD, RM 750 PSU

Laptop: Lenovo Ideapad 510s 14":  i5 7200U, 8 GB RAM, 500 GB 860 EVO

Phone: Samsung Galaxy S20 FE 4G

I've found the Puget Systems guide helpful when I hit some issues getting CUDA installed on a Linux machine, so you could try their Windows 10 guide: https://www.pugetsystems.com/labs/hpc/How-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-UPDATED-1419/.

 

Skip the CUDA/Nvidia driver steps and install `tensorflow` rather than `tensorflow-gpu` if you don't have an Nvidia GPU.
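
Once it's in, a quick smoke test (my own, not from the Puget guide) will confirm the install actually works:

import tensorflow as tf

# Should print the installed version and run a trivial op without errors
print(tf.__version__)
print(tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0]))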

CPU: 6700k GPU: Zotac RTX 2070 S RAM: 16GB 3200MHz  SSD: 2x1TB M.2  Case: DAN Case A4


Are you sure it doesn't do anything? TensorFlow takes a long time to install, and training a model takes even longer. I think the best way to get a grasp of how it works is to do a real-world project.

 

Follow this guide. It walks through training a neural network that can tell whether a user liked a movie from the comments they posted.

https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/3 - Faster Sentiment Analysis.ipynb 

 

You need to install all the dependencies first:

pip install torchtext
python -m spacy download en
pip install transformers
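
If you want to make sure those installed correctly before running the full script, a quick import check (my addition, not from the notebook) looks like this:

import torch
import torchtext
import spacy

# This line fails if `python -m spacy download en` wasn't run
nlp = spacy.load('en')
print(torch.__version__, torchtext.__version__, spacy.__version__)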

 

This is a Python script that is mostly copy-and-paste of the code snippets from the GitHub notebook linked above. I used it in a Flask app I made.

#!/usr/bin/python

import torch
from torchtext import data
from torchtext import datasets
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import time
import spacy
import flask

app = flask.Flask(__name__)


# Set to False to disable training. Important to turn this off after training once,
# or you will wait another hour on every run.
TO_TRAIN = True

# The more epochs, the more training is done. Note that without a GPU, each epoch can take upwards of 10 minutes.
N_EPOCHS = 5

# File name for the model
FILE_NAME = 'avg_model.pt'

def generate_bigrams(x):
    # Append every adjacent word pair (bigram) to the token list, so the
    # model sees two-word phrases as well as single words
    n_grams = set(zip(*[x[i:] for i in range(2)]))
    for n_gram in n_grams:
        x.append(' '.join(n_gram))
    return x

SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize = 'spacy', preprocessing = generate_bigrams)
LABEL = data.LabelField(dtype = torch.float)

train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(train_data,
                 max_size = MAX_VOCAB_SIZE,
                 vectors = "glove.6B.100d",
                 unk_init = torch.Tensor.normal_)

LABEL.build_vocab(train_data)

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

class FastText(nn.Module):
    def __init__(self, vocab_size, embedding_dim, output_dim, pad_idx):

        super().__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)

        self.fc = nn.Linear(embedding_dim, output_dim)

    def forward(self, text):

        #text = [sent len, batch size]

        embedded = self.embedding(text)

        #embedded = [sent len, batch size, emb dim]

        embedded = embedded.permute(1, 0, 2)

        #embedded = [batch size, sent len, emb dim]

        pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)

        #pooled = [batch size, embedding_dim]

        return self.fc(pooled)

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
OUTPUT_DIM = 1
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = FastText(INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM, PAD_IDX)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

pretrained_embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

optimizer = optim.Adam(model.parameters())

criterion = nn.BCEWithLogitsLoss()

model = model.to(device)
criterion = criterion.to(device)

def binary_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """

    #round predictions to the closest integer
    rounded_preds = torch.round(torch.sigmoid(preds))
    correct = (rounded_preds == y).float() #convert into float for division
    acc = correct.sum() / len(correct)
    return acc

def train(model, iterator, optimizer, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:

        optimizer.zero_grad()

        predictions = model(batch.text).squeeze(1)

        loss = criterion(predictions, batch.label)

        acc = binary_accuracy(predictions, batch.label)

        loss.backward()

        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, iterator, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():

        for batch in iterator:

            predictions = model(batch.text).squeeze(1)

            loss = criterion(predictions, batch.label)

            acc = binary_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

if TO_TRAIN:
    print("BEGIN TRAINING FOR " + str(N_EPOCHS) + " epochs")
    best_valid_loss = float('inf')

    for epoch in range(N_EPOCHS):

        start_time = time.time()

        train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
        valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)

        end_time = time.time()

        epoch_mins, epoch_secs = epoch_time(start_time, end_time)

        if valid_loss < best_valid_loss:
            best_valid_loss = valid_loss
            torch.save(model.state_dict(), FILE_NAME)

        print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
        print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
        print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')

model.load_state_dict(torch.load(FILE_NAME))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')

nlp = spacy.load('en')

def predict_sentiment(model, sentence):
    model.eval()
    tokenized = generate_bigrams([tok.text for tok in nlp.tokenizer(sentence)])
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    tensor = tensor.unsqueeze(1)
    prediction = torch.sigmoid(model(tensor))
    return prediction.item()

@app.route("/predict", methods=["POST"])
def view_results():
    # Named `payload` rather than `data` to avoid shadowing the torchtext import
    payload = flask.request.json
    sentence = payload["sentence"]
    result = predict_sentiment(model, sentence)
    return flask.jsonify(result=result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3000, threaded=True)
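
Once the app is running, you can hit the endpoint like this (assuming localhost and port 3000, as in the script above):

import json
import urllib.request

# Minimal client for the /predict route above
req = urllib.request.Request(
    'http://localhost:3000/predict',
    data=json.dumps({'sentence': 'This film was fantastic!'}).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # {"result": <score between 0 and 1>}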

 

For me, it took literally an hour to train the neural network. I have no idea about the theory and science behind it, but surprisingly it does work. I can tell it something like "This movie is so hip!" and it will correctly tell me that it means I liked the film. Pretty neat.

 

Edit: here are tutorials relevant to TensorFlow; I didn't realize PyTorch is a different ML framework. Pick whichever one is to your liking and try it out.

https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/README.md
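
If you want the TensorFlow flavour of the same movie-review exercise, a bare-bones Keras version (my own sketch, not one of the linked tutorials) looks roughly like this:

import tensorflow as tf

# Built-in IMDB dataset, already tokenized to integer ids
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=10000)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=200)

# Embedding + average pooling + sigmoid: same idea as the FastText model above
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, batch_size=64)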

 

 

Sudo make me a sandwich 


  • 2 weeks later...

Which version of Python are you trying to run it with?

 

On Windows, 3.7.x is practically unstable.

 

TensorFlow only ever worked for me while using Python up to 3.6.x.
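
Worth double-checking what your scripts actually run, since TensorFlow wheels are tied to specific 64-bit Python versions:

import sys

print(sys.version)          # e.g. 3.6.8; the string also says whether it's a 64-bit build
print(sys.maxsize > 2**32)  # True means 64-bit Python, which TensorFlow requires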


  • 1 month later...

Hello. From my personal experience, I recommend installing it using Anaconda Navigator. Start by creating a new environment, then install tensorflow and the other packages you want through the Navigator. Much simpler than the other methods.
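
For reference, the command-line equivalent of what the Navigator does is roughly this (standard conda commands; the Python version pin is just an example):

conda create -n tf python=3.6
conda activate tf
conda install tensorflow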

