In its simplest sense, typecasting is altering a computer's interpretation of data by implicitly or explicitly changing its data type; for example, by changing an `int` to a `float`, and vice versa.
To better understand typecasting, we must start with data types themselves. In programming languages like C, every variable has a `type` that determines how the computer and the user interpret that variable. Data types such as `int`, `long long`, `float` and `double` each have their own unique characteristics and are used to handle values of various ranges and precisions.
Typecasting allows us to take a floating point number, like 3.14, and extract the whole number before the decimal point - 3 - by casting it to an `int`.
Let's use an example from the English language to better clarify what we mean.
example.
WIND
Each carefully manipulated line in the example above forms a unique symbol. However, these symbols are immediately identifiable as letters to anyone fluent in a language written in the Latin alphabet. We implicitly understand the data type `letter`.
Even more interesting, reviewing the string of `letter` data type symbols composing the example above, we can see that two very different, specific data types are formed. Each of the two words that are formed has a completely different meaning, connotation, pronunciation and history.
There is the noun wind, as in: "The wind blows outside". Yet there is also the verb wind, as in: "Wind up that spool".
This is a valuable analogy inasmuch as it leads us to understand that how we type the data determines how we use that data. The `noun` data type of WIND ought to be used in very different circumstances than the `verb` data type of WIND.
Setting aside more advanced topics such as Natural Language Processing for a moment, let's take for granted that computers do not care about English grammar. Computer programming languages, such as C, rely on the same idea - taking the same bit of data, and using it very differently based on how we cast the `type` of that data.
Here are the most common data types on a 32-bit system:
1 byte : char
4 bytes : int, float
8 bytes : long long, double
Each byte represents 8 bits of memory storage. Thus, a variable of type `int` will use 32 bits of memory when it is stored. As long as that variable remains of type `int`, the processor will always be able to convert those bits back to the relevant number. However, we could in theory reinterpret those same 32 bits as something else entirely. The computer would then no longer see a number in that address space, but a different sequence of binary values. We could read that data as a different numeric type, or even as a string of four characters.
When dealing with numbers and type casting, it is vital to understand how the *precision* of your value will be affected. Keep in mind that the precision can stay the same, or you can lose precision - as in our float 3.14 to int 3 example at the very beginning of our discussion. You cannot, however, gain precision. The data to do so simply does not exist in the addressed memory space you would be attempting to pull it from.
Let's review the 3 most common ways you can lose precision.
Casting a float to an int would cause truncation of everything after the decimal point, leaving us with a whole number.
To perform a float to int conversion, we can use the following simple operation:
example.
float -> int
float x = 3.7;
(int)x;
In the scenario above, `(int)x` equals 3, because everything after the decimal point is truncated.
We can also convert a long long to an int:
long long -> int
As before, this will lead to a loss of higher-order bits: a `long long` takes up 8 bytes, or 64 bits, in memory, while an `int` only has 32 bits to work with.
Similarly, a double can be cast to a float:
double -> float
This will give you the closest possible float to the double's value, rounding when necessary. A double stores 53 significant bits, roughly 15-16 significant decimal digits, while a float stores only 24 significant bits, roughly 7 decimal digits.
Because floats can only store 24 significant bits, they can represent every integer exactly only up to 2^24, i.e. 16,777,216. The next integer, 16,777,217, is the first one a float cannot store exactly.
EXPLICIT VS IMPLICIT CASTING
Explicit casting is when we write the data type in parentheses before the variable name, like so:
(int)x -> explicit casting
Implicit casting is when the compiler automatically converts between types on its own - promoting operands to a common "super-type", or performing some other conversion - without requiring any additional code from the user.
For example when we write the following:
5 + 1.1 -> implicit casting
The values already have types associated with them. 5 is an `int`, while 1.1 is a floating-point constant (a `double` in C). In order to add the two of them together, the computer implicitly casts the `int` 5 to match:

(double)5 + 1.1 -> implicit casting
Implicit casting also allows us to assign variables of different types to each other. We can always assign a less precise type into a more precise one. For instance:
example.
double x;
int y;
From this example, `x = y` is always safe, because a `double` has more precision and range than an `int`. On the other hand, `y = x` is problematic: `x` might hold a larger value, or a fractional part, that `y` cannot represent, and some of the information stored in the double would be lost.
Type casting is also used in comparison operators such as:
< LESS THAN
> GREATER THAN
== EQUAL TO
example.
if (5.1 > 5)
The example above will evaluate to 1 (true), because the compiler implicitly casts the `int` 5 to a floating-point value in order to compare the two numbers. The same would be true of this example as well:
if(2.0 == 2)
Also, don't forget that `int`s can be cast to `char`s and vice versa. A `char` is stored in memory as a number - its ASCII value - which is why you can easily convert between `char`s and their respective ASCII values.