
More bits, more numbers. So where is more-than-64-bit computation?

Gat Pelsinger
Solved by Eigenvektor:

Nothing is stopping you from doing this, other than maybe convenience. You can allocate any arbitrary number of bits and treat them as a number, but you'll have to take care of e.g. carry-over if you add two such numbers, since the CPU has no native support for it. Even basic addition, subtraction, multiplication and division will take time to implement.
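As a rough sketch of that carry handling (the class and method names here are just for illustration), here's a 128-bit unsigned add built from two 64-bit longs. The CPU adds 64 bits at a time, so the carry out of the low word has to be propagated by hand:

```java
public class Add128 {
    // Returns {hi, lo} of (aHi:aLo) + (bHi:bLo), ignoring overflow past 128 bits
    public static long[] add(long aHi, long aLo, long bHi, long bLo) {
        long lo = aLo + bLo;
        // An unsigned comparison detects wrap-around, i.e. a carry out of the low word
        long carry = Long.compareUnsigned(lo, aLo) < 0 ? 1 : 0;
        long hi = aHi + bHi + carry;
        return new long[] { hi, lo };
    }

    public static void main(String[] args) {
        // (2^64 - 1) + 1 should carry into the high word, giving 2^64 = (1, 0)
        long[] r = add(0L, -1L, 0L, 1L);
        System.out.println(r[0] + " " + r[1]); // prints "1 0"
    }
}
```

Every extra 64-bit word adds another addition plus carry check, which is exactly the overhead the CPU handles for free at native register width.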

 

Of course, ideally you'll want to use a library (or write your own) that takes care of this under the hood. For example, in Java you have classes like BigInteger, which can theoretically represent integers of (almost) arbitrary length.
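For instance, a value like 2^100 doesn't fit in any primitive type, but BigInteger handles it without any manual carry logic:

```java
import java.math.BigInteger;

public class BigIntDemo {
    public static void main(String[] args) {
        // 2^100 is far beyond the 64-bit range
        BigInteger big = BigInteger.TWO.pow(100);
        System.out.println(big); // prints "1267650600228229401496703205376"

        // Arithmetic happens through method calls rather than operators
        System.out.println(big.add(BigInteger.ONE));
    }
}
```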

 

Internally it uses an int[] to be able to use more than 32 bits, so technically you're limited by the maximum size of an array (limited by the JVM and available memory). So the maximum number it can represent is as high as 2^(32 × 2,147,483,642). Just keep in mind that one such number on its own would already require roughly 8 GiB of memory.

 

Realistically, you'll find few use cases (other than maybe scientific) where 2^32 or 2^64 isn't big enough for your needs.

The more bits you add, the higher the number you can represent. So what is stopping me from allocating more than 64 bits to a variable and letting me count beyond signed 9.2 quintillion or unsigned 18.4 quintillion?

 

Microsoft owns my soul.

 

Also, Dell is evil, but HP kinda nice.


Nothing stops you. We use 32 bits and 64 bits because it's FAST: processors have 32-bit or 64-bit registers, 32- or 64-bit data paths to caches and memory, and so on, so those sizes take the fewest CPU cycles per operation. When you work with sizes bigger than the register size, operations have to be done in multiple steps.
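Multiplication shows this cost well: a full 64×64 → 128-bit product needs two machine multiplies plus fix-ups instead of one. A sketch (assuming Java 9+ for Math.multiplyHigh; the class name is just for illustration):

```java
public class Mul128 {
    // Returns {hi, lo} of the 128-bit product of two unsigned 64-bit values
    public static long[] mulUnsigned(long a, long b) {
        long lo = a * b; // low 64 bits are the same for signed and unsigned
        // Math.multiplyHigh gives the signed high half; correct it for unsigned inputs
        long hi = Math.multiplyHigh(a, b);
        if (a < 0) hi += b;
        if (b < 0) hi += a;
        return new long[] { hi, lo };
    }

    public static void main(String[] args) {
        // 2^32 * 2^32 = 2^64, i.e. high word 1, low word 0
        long[] r = mulUnsigned(1L << 32, 1L << 32);
        System.out.println(r[0] + " " + r[1]); // prints "1 0"
    }
}
```

At native width the hardware does all of this in a single multiply instruction; every extra word multiplies the number of partial products you have to compute and combine.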

 

There are libraries that work with variables wider than 64 bits, for example bignum libraries.

 

There are libraries that keep numbers in their string representation, so you can add or subtract by manipulating characters.
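The string approach is just grade-school arithmetic on decimal digits. A minimal sketch of addition (class name is illustrative, not any particular library's API):

```java
public class StringAdd {
    // Adds two non-negative decimal strings digit by digit, right to left
    public static String add(String a, String b) {
        StringBuilder sb = new StringBuilder();
        int i = a.length() - 1, j = b.length() - 1, carry = 0;
        while (i >= 0 || j >= 0 || carry > 0) {
            int digit = carry;
            if (i >= 0) digit += a.charAt(i--) - '0';
            if (j >= 0) digit += b.charAt(j--) - '0';
            sb.append((char) ('0' + digit % 10));
            carry = digit / 10;
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        // (2^64 - 1) + 1, well past what an unsigned 64-bit integer can hold
        System.out.println(add("18446744073709551615", "1")); // prints "18446744073709551616"
    }
}
```

It's slow compared to binary word-at-a-time arithmetic, but the number's length is limited only by memory.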

 

 


