I would say yes, it is. The most reasonable view of consciousness/sentience we have is that it arises from the ability to process abstract, meta-level concepts rather than only concrete ones, such as this very question. It may be that an extremely elaborate and intelligent computer program could be created that is not sentient, or that is sentient but not human-like (this is the central problem in AI safety: an AI that is sentient but does not share human instincts and values). But if the program were a biological simulation of the human brain, there would be no fundamental difference from a human.
Here's how I would think about it. Suppose that in this scenario we could also upload an existing person's brain into the simulation, where they could live forever. Would we then say that person, now a computer simulation, is no longer human? I don't think so; we would treat them the same way as when they were in a physical body. If you agree with that, then there is no reason why a simulated brain that never existed in a physical body should be any different from a moral standpoint.