Float

What Does Float Mean?

In computer science, a float is a data type that represents a number that is not an integer, because it includes a fractional part typically written in decimal notation.


One common informal definition from experts is that a float “has numbers on both sides of the decimal point.” Put more simply, a float can include a decimal fraction, whereas an integer cannot.

The float data type is used in computer programming when more precision is needed than an integer can provide.
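The distinction can be sketched in a few lines of Python (used here purely as an illustration; the article itself is language-agnostic):

```python
# Integer division discards the fractional part...
print(7 // 2)   # 3

# ...while float division keeps it.
print(7 / 2)    # 3.5

# A float can therefore hold values an integer cannot.
price = 10.5
print(type(price).__name__)   # float
```

The same contrast exists in most languages: an integer type stores whole numbers only, while a float stores a value with a fractional part.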

Techopedia Explains Float

Since the early days of computer programming, floats have given programmers a data type for numbers that include decimal fractions. Understanding the nature of the float is vital in type conversion, in declaring variables, and in using variables within a codebase.

If the data types are not correctly handled, errors can result.

Early examples of floating-point support include FORTRAN, where a double-precision floating-point number was declared with the “double precision” keyword, while the “real” data type indicated a single-precision floating-point number.

Another early language utilizing the float was COBOL, which is still very much in use at many institutions, largely because of an aversion to migrating legacy systems. A Medium article discussing the widespread use of COBOL makes an excellent point about how valuable float data types can be in software.

For a direct example, let's think about an IRS data program and whether it would use a float or not.

If the IRS requirements do not call for reporting fractions of a dollar, an integer format is entirely sufficient. Variables could all be integers, and the float type could be avoided, yielding some efficiencies in the code.

On the other hand, if the program needs to report fractions of a dollar, programmers would declare a variable as a float and store both the dollars and cents in decimal form. For example, a float variable for $10.50 would hold the value 10.5.
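A minimal Python sketch of the hypothetical IRS-style example above, with made-up variable names:

```python
whole_dollars = 10          # an int is sufficient when cents are never reported
dollars_and_cents = 10.5    # $10.50 stored as a float, per the example above

print(dollars_and_cents)             # 10.5
print(f"${dollars_and_cents:.2f}")   # $10.50
```

As a practical aside, production financial code often uses fixed-point or decimal types (such as Python's `decimal.Decimal`) rather than binary floats, to avoid rounding surprises; the float here simply illustrates the article's point.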

Now consider what happens if the data types are not correct. If the program tries to hold 10.5 as an integer, it may store "10" or generate an error. Data types need to be chosen according to the real data that will be processed and the procedures that will be implemented in the system.
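Both failure modes described above can be demonstrated in Python: converting the float 10.5 to an integer silently truncates it, while converting the string "10.5" to an integer raises an error outright.

```python
# The fractional part is silently discarded.
print(int(10.5))   # 10

# Parsing the same value from a string as an int fails outright.
try:
    int("10.5")
except ValueError as exc:
    print("error:", exc)
```

Which behavior you get (truncation or an error) depends on the language and the conversion path, which is why the surrounding text stresses designing data types around the real data.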

Through the years, as computer programming evolved, the use of floats and other data types was optimized for various kinds of memory usage. However, going back to Marianne Bellotti’s COBOL article, the point remains that accommodating float variables takes work and may lead to all kinds of debates about the best programming language or environment for a given system.

In the era of containers and virtual machines, it seems highly counterintuitive that a codebase running in these environments would be unable to handle decimal numbers, but programmers still have to weigh the options and make the right decisions when:

  • Designing new systems.
  • Migrating legacy systems.
  • Performing routine maintenance on systems that use this type of data.


Margaret Rouse

Margaret Rouse is an award-winning technical writer and teacher known for her ability to explain complex technical subjects to a non-technical, business audience. Over the past twenty years her explanations have appeared on TechTarget websites, and she has been cited as an authority in articles by the New York Times, Time Magazine, USA Today, ZDNet, PC Magazine and Discovery Magazine. Margaret's idea of a fun day is helping IT and business professionals learn to speak each other's highly specialized languages.