Sunday, May 7, 2017

Book I Chapter 2: Swift Basic Data Type


As mentioned in the previous chapter, there are three basic data types that Swift encourages us to use: String, Int and Double.


String is for text, Int is for integers and Double is for floating point numbers.


In addition to the three basic data types, we can also use unsigned integers, represented by UInt, and a smaller floating point number, represented by Float.


Both signed and unsigned integers can be broken down into bit-sized integers. An eight-bit signed integer is represented by Int8. For signed integers we have Int8, Int16, Int32 and Int64. Similarly, for unsigned integers we have UInt8, UInt16, UInt32 and UInt64.


Finally, we have the Boolean type, which stores a true or false value.


String

In Swift, String is the data type that handles text. It is the most frequently used data type.


String Literals

String literals are basically raw text written directly in the code. Swift accepts any kind of Unicode text, except that the backslash and the double quotation mark have special meaning and must be escaped (see Escape Sequence below).


Double quotation marks are used to enclose the text; they mark the beginning and the end of the string.


Example:


"This is a text"
"!@#$%^&*()_+=-:';><?,./|{}[]"




Unicode String

Swift accepts any Unicode character.


Example:


"2010年7月,苹果开发者工具部门总监克里斯·拉特納开始着手 Swift 编程语言的設計工作,以一年時間,完成基本架構後,他領導了一個設計團隊大力參與其中。"


"The Swift Programming Language est un manuel de 500 pages disponible gratuitement sur iBookStore permettant de décrire les fonctionnalités du langage."


"その後アップル社内での4年間の開発期間を経て、2014年のWWDCにおいて一般に発表され、同時にアップルに開発者登録している開発者に対してベータ版の提供が開始された。"


"기존의 애플 운영체제용 언어인 오브젝티브-C와 함께 공존할 목적으로 만들어졌다."


"постала прва јавно доступна апликација написана у Свифту."




We can even include box-drawing characters and other Unicode symbols.


Example:


"⊤⊛⊒⊍⊩"




If we know the Unicode code point, we can write the character using the following syntax:


"\u{<unicode_number>}"


Example:


"\u{2EE5}"
"\u{2F25}\u{2F26}"




Example 2:


"\u{2665}"
"\u{20AC}"
"\u{21A4}"
"\u{222D}"




Extended Grapheme Cluster

An extended grapheme cluster is a sequence of Unicode scalars that combine to form a single human-readable character. In many languages, characters are formed by combining several marks or strokes.


Example:
let tibetan1 = "\u{0F40}"
let tibetan2 = "\u{0F7D}"


let tibetan3 = "\u{0F40}\u{0F7D}"




Example 2:


"\u{65}"
"\u{301}"


"\u{65}\u{301}"


"\u{E9}"




Note: the example above shows that the combination of Unicode scalars U+0065 and U+0301 is equivalent to the single character U+00E9. More similar examples below:


Example 3:


"\u{110B}"
"\u{1175}"


"\u{110B}\u{1175}"


"\u{C774}"




Example 4:


"\u{1112}"
"\u{1161}"
"\u{11AB}"


"\u{1112}\u{1161}\u{11AB}"


"\u{D55C}"




Escape Sequence

Since a string is enclosed in double quotation marks, we cannot type a double quotation mark directly inside a string. To solve this problem, Swift provides escape sequences, introduced by the backslash (\). As a consequence, neither the double quotation mark (") nor the backslash (\) can appear in a string on its own.


To use these two symbols, we escape them with a backslash. To include a double quotation mark in a string we enter \" instead of ", and to include a backslash we enter \\ instead of \.


Example:


let sampleText1 = "She said \"The dateline is unreasonable.\" and I agreed."
print(sampleText1)


let sampleText2 = "To print a double quotation mark or a backslash, we must use the escape sequences \\ and \"."
print(sampleText2)






Create a String Constant



If we want a string value to remain unchanged, we should create a string constant.


We can create a string constant by using the following syntax:


let <constant_name> = <string literals/string expression>


Example:


let stringConstant = "This is a string constant"




We can also declare the constant first and assign the data later:


let <constant_name>:String


To assign the constant use the syntax:
<constant_name> = <string literals/string expression>


Example:


let stringConstant2: String
stringConstant2 = "This constant can only be assigned ONCE"




Create a String Variable

If we want a string value that can change as and when required, we should create a string variable.
A string variable can be created using the following syntax:


var <variable_name> = <string literals/string expression>


Example:


var stringVariable1 = "Default string value"
stringVariable1 = "New string value"




Alternatively, we can also declare the variable first and assign the data later:

var <variable_name>:String


To assign a value to the variable, use the syntax:
<variable_name> = <string literals/string expression>


Example:


var stringVariable2: String
   
stringVariable2 = "This string can be changed anytime."


stringVariable2 = "Latest update."




Empty String



We can also create an empty string by just providing the double quotation:


var <variable_name> = ""


This is better than declaring a String without an assignment, because it ensures that the string variable has been initialized and is ready to use.


Example:


var stringVariable3 = ""




To check whether a string constant or variable is empty, we use the isEmpty property.


Example:


stringVariable3.isEmpty




Concatenating Strings

  • We can join strings using the plus operator (+).
  • We can join any combination of string literals, string constants and string variables.


Example:


let stringSample1 = "This is a test." + " And this is the second part of a joined string"


print(stringSample1)






Example 2: Joining a string constant and string literals


let stringSample2 = "This is second test."


let stringSample3 = stringSample2 + " " + "Second part of second test string."


print(stringSample3)






Example 3: Joining two string constants


let stringSample4 = "Hello, welcome to Swift. "


let stringSample5 = "I am your assistant."


let stringSample6 = stringSample4 + stringSample5


print(stringSample6)






  • We can also join strings using the addition assignment operator (+=). With this operator we can append a new string to the original string variable.


Example 4:


var stringSample7 = "Hello, "


stringSample7 += "Welcome to the Star Fleet."


print(stringSample7)






Example 5:



var stringSample8 = "Hello, "

let stringSample9 = "Command Center"

let space = " "

stringSample8 += "Sir." + space + "Welcome to the "

stringSample8 += stringSample9 + "."

print(stringSample8)







Numeric Literals

Before we start discussing integers and floating point numbers, we should explore numeric literals. In programming, numeric literals are literally numbers written in raw form, such as 254, -58 and 2.655.


Swift accepts numeric literals in different forms. Besides decimal numbers, Swift also accepts numbers in binary, hexadecimal and octal form. To differentiate these from decimal numbers, we must include the appropriate prefix:



  • Binary: 0b
  • Hexadecimal: 0x
  • Octal: 0o


For example, the number 255 can be written in any of the following forms, all of which are acceptable in Swift:

  • Decimal: 255
  • Binary: 0b11111111
  • Hexadecimal: 0xFF
  • Octal: 0o377




We can declare constants or variables using any of these forms:



let constantAA = 255
var variableBB = 0b11111111
let constantCC = 0xFF
var variableDD = 0o377




We can also mix different numeric forms during computation:



let constantEE = 255 + 0b11111111 + 0xFF + 0o377
let constantFF = 255 + 255 + 255 + 255




We can also include different numeric forms in a print statement using string interpolation:



print("The number of 0xFFFF is \(0xFFFF).")






We usually do not use the other numeric forms for data input. We may need them when programming functionality that specifically requires binary, hexadecimal or octal numbers.


For most programming work, we stick to the decimal form.


In addition, we can use underscores (_) to split big numbers and improve readability. For example, 10 billion is 10,000,000,000, which we can write as 10_000_000_000.
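

For example, both of the following declarations (a quick sketch) hold the same value; the underscores are purely for readability:


let tenBillion1 = 10000000000
let tenBillion2 = 10_000_000_000   // same value, easier to read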




 

Integer

In Swift, integers are whole numbers. An integer data type accepts only whole numbers, such as 54 or 57791.


Integers can be further classified into signed integers and unsigned integers. Signed integers accept both negative and positive whole numbers, for example 57, -325 and -54623. Unsigned integers accept only non-negative whole numbers, such as 723 and 9394.


Signed integers are denoted by Int and unsigned integers are denoted by UInt.


In addition, Swift provides bit-sized integers. An unsigned integer with a size of 8 bits is denoted by UInt8. Swift provides four unsigned bit-sized integers: UInt8, UInt16, UInt32 and UInt64. The larger the bit size, the larger the number that can be held. For example, UInt8 only accepts numbers from 0 to 255, whereas UInt64 accepts numbers from 0 to 18446744073709551615.


Similarly, Swift provides four signed bit-sized integers: Int8, Int16, Int32 and Int64. A signed integer can only hold a positive range about half that of the corresponding unsigned integer, because part of its range is used for negative numbers. Therefore Int8 accepts numbers from -128 to 127.


In summary, we have the following integer types: Int, UInt, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32 and UInt64.


We will discuss the bit-sized integers first, before we come to UInt and Int.


Bit Based Integer

Basically we have eight bit-sized integer types, four signed and four unsigned: Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32 and UInt64.


What are the differences? The main difference is their bit size. UInt8 is only 8 bits long whereas UInt64 is 64 bits long. The larger the bit size, the larger the number that can be stored. For example, UInt8 can store numbers from 0 to 255, whereas UInt32 can store numbers from 0 to 4294967295.


For signed integers, the maximum positive value is smaller because the negative numbers must also be accommodated. Int8, for instance, accepts numbers from -128 to 127, so its positive range is roughly half that of UInt8.


The tables below list the range of acceptable numbers for each integer type:



Bit Size    Swift Data Type    Unsigned Minimum    Unsigned Maximum
8 bit       UInt8              0                   255
16 bit      UInt16             0                   65535
32 bit      UInt32             0                   4294967295
64 bit      UInt64             0                   18446744073709551615






Bit Size    Swift Data Type    Signed Minimum          Signed Maximum
8 bit       Int8               -128                    127
16 bit      Int16              -32768                  32767
32 bit      Int32              -2147483648             2147483647
64 bit      Int64              -9223372036854775808    9223372036854775807


As we can see from the tables, the larger the bit size, the larger the number that can fit into a variable.


Declare Bit Sized Integer

To declare a bit-sized integer, we must include a data type annotation, because without it the system will infer any whole number as Int rather than, say, Int16.


Type inference never produces a bit-sized integer on its own.


The syntax is as follows:


let <constant_name>:<bit_sized_integer_type> = <numeric_literal/expression that evaluates to a number>
Or
var <variable_name>:<bit_sized_integer_type> = <numeric_literal/expression that evaluates to a number>


Please refer to the example below.


Example:



let smallestNumber:Int8 = 127
var smallNumber:Int16 = 32767
let bigNumber:UInt32 = 4294967295
var biggerNumber:UInt64 = 18446744073709551615




Working with Bit Sized Integer

It is uncommon for us to declare bit-sized integers unless we need a constant or variable fixed to a particular size across different platforms (32-bit and 64-bit systems).


To find out the maximum or minimum number that an integer type can accept, we append .min for the minimum and .max for the maximum to the data type.


Please see the example below:



let minOfInt8 = Int8.min
let maxOfInt8 = Int8.max

let minOfUInt64 = UInt64.min
// For unsigned integer the min is always 0

let maxOfUInt64 = UInt64.max





We should not use bit-sized integers except in exceptional situations; we should use the default signed integer (Int) instead. Even if we are working with a very small number that would never exceed 100, we should not use Int8 or UInt8. Using Int8 provides no advantage compared to using Int.


In the old days, when memory was scarce, using Int8 made sense as a way to save memory. Using Int8 will not improve processing speed either; in fact it may slow things down, since most operating systems are optimized for 64-bit CPUs.


Unsigned Integer

As mentioned earlier, unsigned integers accept only non-negative numbers. An unsigned integer is represented by UInt. But how big is UInt? UInt changes according to the operating environment, in particular the CPU word size.


  • If a computer runs on a 32-bit processor, then UInt is equivalent to UInt32.
  • If a computer runs on a 64-bit processor, then UInt is equivalent to UInt64.


Apple devices, from the MacBook to the iPad, run on different processors. All modern Apple products (except the Apple Watch) now run on 64-bit processors; however, older Apple devices may run on 32-bit processors.


Unsigned Integer Limit

If our apps are going to run on both 32-bit and 64-bit platforms, we need to find out the UInt limit for each platform. As before, we can find out the maximum value by appending .max to UInt:



let maxOfUInt = UInt.max




From the result (on a 64-bit machine), we can see that UInt is equivalent to UInt64.


Please take note of this when developing apps that cater for older generations of iPad.
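

If we want to confirm the platform word size in code, one simple sketch is to check the size of UInt in bytes:


// MemoryLayout<UInt>.size gives the size of UInt in bytes: 8 on a 64-bit device, 4 on a 32-bit device
let uintSizeInBytes = MemoryLayout<UInt>.size
print("UInt is \(uintSizeInBytes * 8)-bit on this device.")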


Using Unsigned Integer

To use an unsigned integer, we must declare it with a data type annotation. Type inference does not produce unsigned integers.


The syntax is as follows:


let <constant_name>:UInt = <numeric_literal/expression that evaluates to a number>
Or
var <variable_name>:UInt = <numeric_literal/expression that evaluates to a number>


Example 1:



let someUnsignedNumber:UInt = 787





Example 2:



var someUnsignedVariable:UInt = 54

someUnsignedVariable = 768

print("The unsigned number is \(someUnsignedVariable).")







An unsigned integer can be printed with the print function by placing the constant or variable within the parentheses.


We can also print unsigned integers via string interpolation.
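

A brief sketch using a hypothetical constant shows both forms:


let starbaseCount: UInt = 12                      // hypothetical unsigned constant for illustration
print(starbaseCount)                              // prints the value directly
print("There are \(starbaseCount) starbases.")    // prints via string interpolation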


Swift encourages us to use the default signed integer to provide better code interoperability. We should not use unsigned integers unless we are working with numbers so large that they exceed the maximum limit of the signed integer.


Even if we are only using positive numbers, it is recommended to use the default Int. Using Int improves code interoperability and saves us from converting between different integer types.


As mentioned in the previous section, using UInt does not improve processing speed or efficiency. Only use UInt when we need numbers larger than a signed integer can accept.

Signed Integer

Similarly, the size of the default signed integer is derived from the CPU word size.


  • If a computer runs on a 32-bit processor, then Int is equivalent to Int32.
  • If a computer runs on a 64-bit processor, then Int is equivalent to Int64.


To find out the maximum and minimum limits, we append .max and .min:



let minLimitInt = Int.min
let maxLimitInt = Int.max




Data Type Inference

The signed integer (Int) is the default for type inference. Any whole number will automatically be inferred as Int. Therefore, we do not have to include a data type annotation when declaring such a constant or variable.


For better code interoperability, we should use Int whenever possible, even if we are only using positive numbers.
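

A short sketch of this inference at work; type(of:) is used here only to confirm the inferred type:


let inferredNumber = 42            // no annotation needed, inferred as Int
print(type(of: inferredNumber))    // prints "Int"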


Integer Overflow

Readers familiar with Objective-C or C programming will know that adding 1 to the maximum limit causes the integer variable to overflow. In Swift, integers do not silently overflow: if we add 1 to the maximum value, the program simply reports an error.
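

For illustration, the commented-out line below is the kind of statement that triggers the error (a small sketch):


let maximumValue = Int.max
// let overflowedValue = maximumValue + 1
// Uncommenting the line above triggers an overflow error at run time instead of wrapping around silently.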




Floating Point Number

Floating point numbers are numbers with a fractional part. For example, 3.1415, 0.22 and -5.23 are all floating point numbers. It is also advisable to use floating point numbers for division, unless we deliberately want to discard the remainder.
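

For example, integer division discards the remainder while floating point division keeps it:


let integerDivision = 7 / 2        // 3, the remainder is discarded
let doubleDivision = 7.0 / 2.0     // 3.5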


In Swift, floating point numbers are represented by two data types: Double and Float. Double is a 64-bit floating point number with a precision of at least 15 decimal digits, whereas Float is a 32-bit floating point number with a precision of as little as 6 decimal digits.


Please also note that floating point numbers are not limited by the CPU word size; they are usually handled by the floating point unit (also known as the co-processor or math processor).


Floating Point Literals

A floating point number can be written as a decimal literal or as a hexadecimal literal with the 0x prefix.


Decimal floating point literals must have a decimal point with digits on both sides of it.


For example, the number 132 will be inferred as Int. To let the system infer it as a floating point number, we must write 132 as 132.0.
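

A quick sketch of this rule:


let wholeNumber = 132       // inferred as Int
let decimalNumber = 132.0   // inferred as Double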




In addition, we can include an exponent with an E or e. For example, the number 255 can be written as 2.55 × 10² or as 255000 × 10⁻³. In Swift, we write such numbers using E or e to represent ×10, and the number after the E or e is the exponent.


For example, 2.55 × 10² can be written as 2.55e2, and 255000 × 10⁻³ can be written as 255000e-3.




Examples:



let fpLiteral2 = 3.1415

let fpLiteral3 = 2.55e2

let fpLiteral4 = 2.55E2

let fpLiteral5 = 25.64E4




More Example:



// Decimal: use e for exponential
let val22 = 2.1718
let val23 = 0.021718e2
let val24 = 217.18e-2




Floating point numbers can also be written in hexadecimal form. To write a floating point number in hexadecimal, we must use the 0x prefix and include an exponent marked with p. The p exponent represents a power of 2, and the exponent itself is written as a decimal number.


For example, 255 can be written as 0xFFp0. The number 0xFFp3 is equivalent to 255 × 2³ = 2040.




Additional example:



// Hex: use p for exponential
let val25 = 0xFFp2
let val26 = 0xFFp1
let val27 = 0xFFp0
let val28 = 0xFFp-1
let val29 = 0xFFp-2





More Example:



// More example: All the following numeric literals refer to the same number.
let val30 = 12.1875
let val31 = 1.21875e1
let val32 = 0xC.3p0




We can also use underscores to split long digit sequences into more readable sections.




Example:



// We can use _ and padded 0 for readability
let val34 = 000223.000655
let val35 = 000_223.000_655





Float

Float is a 32-bit floating point number with a limited precision of about six decimal digits. Because of this limited precision, it is not recommended unless absolutely necessary.


Using Float

To declare a Float constant or variable, we need to include the data type annotation. Type inference never produces a Float; decimal literals are inferred as Double.


Syntax is as follows:
let <constant_name>:Float = <floating_point_number/expression_that_evaluates_to_a_floating_point_number>
Or
var <variable_name>:Float = <floating_point_number/expression_that_evaluates_to_a_floating_point_number>


Example:



let shortPi:Float = 3.14159265359

let longPi = 3.14159265359





As mentioned earlier, Float has limited precision. If we enter a decimal number with many decimal digits, it will be rounded to about six significant decimal digits.
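

For example, a Float initialized with a long decimal keeps only about six significant digits (a small sketch; the exact printed digits may vary slightly):


let roundedPi: Float = 3.14159265359
print(roundedPi)   // prints roughly 3.141593, the extra digits are lost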


Is it more efficient to use Float if the numbers we are computing never exceed two decimal places? Not necessarily. Memory is now much more affordable, so we would not save much by sticking to Float. CPU execution is a complex process, so using Float does not necessarily execute faster; it may even slow execution down with extra wait cycles.

Double

Double is a 64-bit floating point number with a precision of at least 15 decimal digits. It is the recommended data type for decimal numbers.


Using Double

To declare a constant or variable with an assignment, we do not need a data type annotation, since the system infers any decimal number as Double.


Syntax is as follows:
let <constant_name> = <floating_point_number/expression_that_evaluates_to_a_floating_point_number>
Or
var <variable_name> = <floating_point_number/expression_that_evaluates_to_a_floating_point_number>


Example:



let someFPNumber = 2.1718

let pi = 3.1415

let radius = 1.2






Data Type Conversion

We cannot perform computation on values of different data types. For example, we cannot add a floating point number (Double) to a variable that is an integer (Int). To perform such a computation, we must first convert the integer to a floating point number. Similarly, we cannot add an Int8 variable to a UInt16 variable.


The syntax for converting between data types is as follows:


let <constant_name> = <datatype>(<numeric_literals/constant/variables>)
Or
var <variable_name> = <datatype>(<numeric_literals/constant/variables>)
Or
<declared_variable_name> = <datatype_same_as_variable_name>(<numeric_literals/constant/variables>)


The basic rule for data type conversion is as follows:
  • The value being converted must be acceptable to (i.e. fit within the range of) the target data type.


Therefore, we must know each data type's limits so that we can perform conversions without error.


Conversion between Bit Sized Integer

To convert between bit-sized integers, we use the syntax above.


Example:



let numberA:UInt8 = 87
let numberB:Int16 = 43
let numberC = 23

let numberD = Int16(12)

let numberE = Int32(numberA)
var numberF = UInt16(numberB)
let numberG = UInt64(numberC)

numberF = UInt16(numberD)




As shown in the example above, we can place a numeric literal inside the conversion parentheses, as in numberD. To reassign a variable, the data type of the converter must match the variable on the left-hand side of the assignment, as in numberF in the last statement. For a declaration, note that the constant or variable takes whatever data type the converter specifies.


We can also declare the constant or variable first and perform the assignment later. Note that the converter must produce the same data type as the declared variable or constant.


Example:



let numberI:UInt64

numberI = UInt64(numberA)

var numberJ:Int16

numberJ = Int16(numberC)




We can also perform computation as shown below:



let numberH = UInt32(numberA) + UInt32(numberB) + UInt32(numberC)




In the example above, we can omit the converter for any operand that already has the data type we want. See another example below:


In the following example, we are adding an Int8, an Int16 and a UInt32. Since the data type we want for the sum (in this case UInt32) is already present as one of the operands, that operand does not need a converter.


Example:



let numberX:Int8 = 54
let numberY:UInt32 = 885665
let numberZ:Int16 = 2565

let numberXYZ = UInt32(numberX) + numberY + UInt32(numberZ)





Important: during conversion, make sure the target data type can actually accept the converted number.


Example:



let numberK:UInt32 = 522

// The following statement will generate an error since Int8 cannot accept any number larger than 127
let numberL = Int8(numberK)





More Example:



//: For data conversion use desiredDataType(InitialValueOrVariable)
let val36:UInt16 = 0xFFFE
let val37:UInt8 = 1
let val38:UInt16 = val36 + UInt16(val37)
//let val39 = val36 + 2 // This statement would generate an error as the result exceeds the maximum value of UInt16
let val39 = Int(val36) + 2
// In the previous statement, we convert val36 from UInt16 to Int, which can hold larger numbers.



Conversion between Integers

As discussed earlier, the sizes of UInt and Int are derived from the CPU. If the CPU is 32-bit, then UInt and Int are the same as UInt32 and Int32 respectively. Assuming our CPU is 64-bit, we can use Int64 and Int interchangeably. However, it is advisable to stick to Int for interoperability.

To convert between UInt and Int, we can use the same syntax as mentioned before.


Example:



// Convert from Int to UInt
let numberM = 21546658785652

let numberN = UInt(numberM)




Example 2:

//Convert from UInt to Int
let numberO:UInt = 321654655

var numberP = Int(numberO)






Conversion between Floating Point

To convert between Float and Double, we use the same syntax.


Example:



// Convert from Float to Double
let shortDecimal1:Float = 2.658846542

let longDecimal1 = Double(shortDecimal1)




From the example above, we also notice that the converted decimal number is not exactly the same as the number originally entered. This is due to the way computers represent floating point numbers: the last digit or two may not be accurate. In practice, we usually discard the last couple of decimal places in situations where decimal precision is important.
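

One possible way to discard the unreliable trailing digits is to round the converted value, here to six decimal places (an illustrative sketch, not a method prescribed above):


let convertedValue = Double(Float(2.658846542))                     // the Float round trip introduces tiny inaccuracies
let trimmedValue = (convertedValue * 1_000_000).rounded() / 1_000_000
print(trimmedValue)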


Example 2:



// Convert from Double to Float

let longDecimal2 = 9.32655646655879523

let shortDecimal2 = Float(longDecimal2)




When converting from a higher-precision type to a lower-precision one, the extra decimal digits cannot be kept; the value is reduced to the nearest value the shorter type can represent.


Converting Floating Point and Integers

The following example demonstrates conversion from an integer to a floating point number (Double).


Example:



// Convert from Int to Double

let number1 = 2546

let number2 = Double(number1)

print(number2)





If we look at the result on the right (in the playground), the displayed number looks the same. If we use a print statement, the number is printed with .0 since the constant is a Double.


When converting from a floating point number to an integer, note that the system drops the decimal portion and keeps the whole number. It does not round to the nearest whole number; it simply discards the decimal places.


Example:



// Convert from Double to Int

let number3 = 2.99464513

let number4 = Int(number3)

print(number4)





Data conversion during computation



Similarly, when performing computation that involves data type conversion, we do not need to convert the operand that already has the data type we want.


Example:



let number10 = 6
let number11 = 0.654596

let number12 = Double(number10) + number11




More example:



// To perform computation with integer and floating point always convert the integer to floating point
let val40 = 2
let val41 = 0.1718
let val42 = Double(val40) + val41
// The previous statement would generate an error if we did not convert val40 to Double, as we cannot add an Int to a Double

// The following statement produces no error because the literals are inferred at compile time, and the compiler infers the most appropriate type
let val43 = 2 + 0.1718





Conversion between String and Numbers



To convert from text to a number, the text must contain a properly formed number:


Example:



// Convert from String to Double / Int
let text1 = "12.345"

let number5 = Double(text1)

let number6 = Int(text1)

let text2 = "556"

let number7 = Int(text2)





As shown in the example above, the string "12.345" cannot be converted to Int.


Example:



// Convert from Double / Int to String
let number8 = 256

let text3 = String(number8)

let number9 = 2.78846562

var text4 = String(number9)





More example:



// To use numbers with string, we need to explicitly convert from a numeric data type to string
let myName = "Thomas"
let mySocialSecurity = 12532455
let myLine1 = myName + " Social Security number is " + String(mySocialSecurity)

// or we can use string interpolation
let myline2 = "\(myName)'s Social Security number is \(mySocialSecurity)"

let hisName1 = "Steve"
let hisEarnings:Float = 145258
let myLine3 = "\(hisName1) just earned $\(hisEarnings * 0.25) for this project."




Comprehensive Type Casting

Type casting is also known as data type conversion. The following example presents comprehensive type casting:



//: Comprehensive Type Casting
// Convert from strings to integer
let str1 = "9384"
let num1 = Int(str1)
num1

// Convert from integer to string
let num2 = 5478955
let str2 = String(num2)
str2

// Convert from string to 32-bit integer
let str3 = "454234"
let num3 = Int32(str3)

// Convert from default integer to 16-bit integer
let numi1 = 54
let numi2 = Int16(numi1)

// Convert from default integer to 8-bit integer
let numi3 = 12
let numi4 = Int8(numi3)

// Convert from default 8-bit integer to default integer
let numi5 = numi4
let numi6 = Int(numi5)

// Convert from default double to float
let num4 = 25.3
let num5 = Float(num4)

// Convert from float to double
let num6 = num5
let num7 = Double(num6)

// Convert from default double to integer. Please note that the decimals will be truncated.
let num8 = 32.946
let num9 = Int(num8)

// Convert from integer to double. No change in value.
let num10 = 2145
let num11 = Double(num10)

// Convert from double to string
let num12 = 32.65
let str4 = String(num12)

// Convert from string (with decimals) to Float and Double. Please note that Float keeps fewer decimal digits
let str5 = "2145.3256525656"
let num14 = Float(str5)
let num15 = Double(str5)





Boolean

Swift also supports the Boolean type, annotated as Bool. Unlike some other languages such as C, where false is represented by 0 and true by any non-zero number, in Swift true and false must be expressed explicitly as the keywords true and false.


Example:



let myFact = true
let myNonFact = false





Hash Value

Although the Boolean data type is explicitly defined and expressed as true or false, internally a true value has a hash value of 1 and a false value has a hash value of 0.


We can read the hash value but we cannot change it. In addition, printing a Boolean with the print function prints true or false, not the hash value.


Example:



let myTruth = true
let myNonTruth = false

myTruth.hashValue

myNonTruth.hashValue

print(myTruth)

print(myNonTruth)






*** End of Chapter ***
