Monday, March 26, 2018

Swift 4 Introduction Series 1.3 - Swift Basic Data Type

Swift Data Type

There are 4 basic data types that Swift encourages programmers to use. They are String, Int, Double and Bool.


String is for text, Int is for integers, Double is for floating point numbers and Bool stores boolean (true or false) values.


In addition to the 4 basic data types, we can also use unsigned integers, represented by UInt, and a smaller floating point number, represented by Float.


Both signed and unsigned integers can be broken down into bit-sized integers. An 8-bit signed integer is represented by Int8. For signed integers we have Int8, Int16, Int32 and Int64. Similarly, for unsigned integers we have UInt8, UInt16, UInt32 and UInt64.


The implementations of integers, floating point numbers and booleans are quite similar to those in many other C-family programming languages such as C, C# or Objective-C. The implementation of String, however, differs from one programming language to another.


In this chapter, we will cover the basics of String; the details will be covered in the chapter "Working with Strings".


In this book, we will also introduce data types that are not quite the same as in other programming languages. A tuple is a data type that contains a group of values. These values can be constructed using a combination of the other basic data types. We can group multiple values into a tuple and pass them around as though we were working with a single variable. (Hint: for C programmers, a tuple is similar to a struct.)
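
For instance, a minimal sketch (the constant name and values here are illustrative):

let httpStatus = (404, "Not Found")
print("Code \(httpStatus.0): \(httpStatus.1)")
// Prints: Code 404: Not Found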


Swift also introduces a data type called Optionals. Optionals handle the situation where a variable has no value. They are similar to nil pointers in Objective-C, but optionals apply to all Swift data types and are safer than Objective-C's nil pointers. Using the if statement together with optionals, Swift provides a programming technique called optional binding that offers safe value checking for all variables. Optional binding is one of the key features of Swift programming.
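
As a brief preview, here is a minimal sketch of optional binding (covered in detail later); Int("42") returns an optional because the conversion may fail:

if let number = Int("42") {
    print("The number is \(number)")
} else {
    print("The text is not a number")
}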


Integer

In Swift, integers are whole numbers. An integer data type accepts only whole numbers, such as 54 or 57791.


Integers can be further classified into signed integers and unsigned integers. Signed integers accept both negative and positive whole numbers. Examples of signed integers are 57, -325 and -54623. Unsigned integers accept only positive whole numbers and zero, such as 723 and 9394.


Signed integers are denoted by Int and unsigned integers are denoted by UInt.


In addition, Swift also provides bit-sized integers. An unsigned integer with a size of 8 bits is denoted by UInt8. Swift provides 4 unsigned bit-sized integers: UInt8, UInt16, UInt32 and UInt64. The larger the bit size, the larger the number it can hold. For example, UInt8 only accepts numbers from 0 to 255, whereas UInt64 accepts numbers from 0 to 18446744073709551615.


Similarly, Swift provides 4 signed bit-sized integers: Int8, Int16, Int32 and Int64. A signed integer of a given bit size can only hold a positive number about half as large as its unsigned counterpart, because half of its range is reserved for negative numbers. Therefore Int8 accepts numbers from -128 to 127.


In summary, we have the following integer types: Int, UInt, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32 and UInt64.


Before we talk about these integers, we need to discuss numeric literals, which form the raw data for integers.

Numeric Literals

Before we start discussing integers and floating point numbers, we should explore numeric literals. In programming, numeric literals are literally numbers in raw form, such as 254, -58 and 2.655.


Swift accepts numeric literals in different forms. Besides decimal numbers, Swift also accepts numbers in binary, hexadecimal and octal form. To differentiate these from decimal numbers, we must include the appropriate prefix:


Binary: 0b
Hexadecimal: 0x
Octal: 0o


For example, the number 255 can be written in any of the following forms, all of which are acceptable in Swift:


Decimal 255
Binary 0b1111_1111
Hexadecimal 0xFF
Octal 0o377




We can declare constants or variables using these forms of numbers.


let constantAA = 255
var variableBB = 0b11111111   // 255
let constantCC = 0xFF         // 255
var variableDD = 0o377        // 255




We can also mix different numeric forms in a computation:


let constantEE = 255 + 0b11111111 + 0xFF + 0o377   // 1020
let constantFF = 255 + 255 + 255 + 255             // 1020




We can also include different numeric forms in a print statement using string interpolation:


print("The number of 0xFFFF is \(0xFFFF).")






We usually do not use the other numeric forms for data input. We may use them when programming functionality that specifically requires them.


For most purposes, we stick to the decimal form.


In addition, we can use underscores (_) to split big numbers for better readability. For example, 10 billion is 10,000,000,000, which we can write as 10_000_000_000.
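
For example:

let tenBillion = 10_000_000_000
// The underscores are ignored by the compiler; this is the same value as 10000000000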




Next, we will explore bit-sized integers before we learn about UInt and Int.


Bit-Sized Integers

Basically we have 8 bit-sized integer types, 4 signed and 4 unsigned: Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32 and UInt64.


What are the differences? The main difference is the bit size. UInt8 is only 8 bits long whereas UInt64 is 64 bits long. The larger the bit size, the larger the number that can be stored. For example, UInt8 can store numbers from 0 to 255, whereas UInt32 can store numbers from 0 to 4294967295.


For signed integers, the largest storable value is smaller because part of the range is used for negative numbers. Int8 accepts numbers from -128 to 127, so the positive capacity is roughly halved compared to UInt8.


The table below lists the acceptable range of numbers for each integer type:



Unsigned integers:

Bit size    Swift Data Type    Minimum    Maximum
8 bit       UInt8              0          255
16 bit      UInt16             0          65535
32 bit      UInt32             0          4294967295
64 bit      UInt64             0          18446744073709551615

Signed integers:

Bit size    Swift Data Type    Minimum                 Maximum
8 bit       Int8               -128                    127
16 bit      Int16              -32768                  32767
32 bit      Int32              -2147483648             2147483647
64 bit      Int64              -9223372036854775808    9223372036854775807


As we can see from the table, the larger the bit size, the larger the number that can fit into a variable.


*****

Declare Bit-Sized Integers

To declare a bit-sized integer, we must include a data type annotation, because without one the system will infer any whole number as Int rather than, say, Int16.


Type inference alone never produces a bit-sized integer.


The syntax is as follows:


let <constant_name>:<bit_sized_integer_type> = <numeric_literal/expression that evaluates to a number>
Or
var <variable_name>:<bit_sized_integer_type> = <numeric_literal/expression that evaluates to a number>


Please refer to the example below.


Example:


let smallestNumber:Int8 = 127
var smallNumber:Int16 = 32767
let bigNumber:UInt32 = 4294967295
var biggerNumber:UInt64 = 18446744073709551615




Working with Bit-Sized Integers

It is uncommon to declare bit-sized integers unless we need a constant or variable fixed at a particular size across different platforms (32-bit and 64-bit systems).


To find out the maximum or minimum number that an integer type can accept, we append .min or .max to the type name.


Please see the example below:


let minOfInt8 = Int8.min
let maxOfInt8 = Int8.max

let minOfUInt64 = UInt64.min
// For unsigned integer the min is always 0

let maxOfUInt64 = UInt64.max




We should not use bit-sized integers except in exceptional situations; we should use the default signed integer (Int) instead. Even if we are working with a very small number that will never exceed 100, we should not use Int8 or UInt8. Using Int8 provides no advantage over Int.

In the old days, when memory was scarce, using Int8 made sense to save memory. Using Int8 will not improve processing speed either; in fact it may slow things down, since most operating systems are optimized for 64-bit CPU operations.


Unsigned Integer

As mentioned earlier, unsigned integers accept only non-negative numbers and are represented by UInt. But how big is UInt? The size of UInt depends on the operating environment, specifically the CPU's native word size:


  • If a computer runs on a 32-bit processor, then UInt is equivalent to UInt32.
  • If a computer runs on a 64-bit processor, then UInt is equivalent to UInt64.


Different Apple devices, from the MacBook to the iPad, run on different processors. All modern Apple products (except the Apple Watch) now run on 64-bit processors; however, older Apple devices may run on 32-bit processors.


Unsigned Integer Limit

If our apps are going to run on both 32-bit and 64-bit platforms, we need to find out the UInt limit for each platform. As before, we can find the maximum value by appending .max to UInt:


let maxOfUInt = UInt.max




From the result (on a 64-bit machine), we can see that UInt is equivalent to UInt64.


Please take note of this when developing apps that cater to older generations of iPad or to the Apple Watch.


Using Unsigned Integer

To use an unsigned integer, we must declare it with a data type annotation. Type inference does not produce unsigned integers.


The syntax is as follows:


let <constant_name>:UInt = <numeric_literal/expression that evaluates to a number>
Or
var <variable_name>:UInt = <numeric_literal/expression that evaluates to a number>


Example 1:


let someUnsignedNumber:UInt = 787


Example 2:


var someUnsignedVariable:UInt = 54

someUnsignedVariable = 768

print("The unsigned number is \(someUnsignedVariable).")






An unsigned integer can be printed with the print function by placing the constant or variable within the parentheses.


We can also print unsigned integers via string interpolation.


Swift encourages us to use the default signed integer for better code interoperability. We should not use unsigned integers unless we need numbers that exceed the maximum limit of the signed integer.


Even if we are only using positive numbers, it is recommended to use the default Int. Using Int improves code interoperability and saves us from converting between different integer types.


As mentioned in the previous section, using UInt does not improve processing speed or efficiency. Only use UInt when we need numbers larger than a signed integer can accept.


Signed Integer

Similarly, the size of the signed integer Int is also derived from the CPU size:


  • If a computer runs on a 32-bit processor, then Int is equivalent to Int32.
  • If a computer runs on a 64-bit processor, then Int is equivalent to Int64.


To find out the maximum and minimum limits, we append .max and .min:


let minLimitInt = Int.min
let maxLimitInt = Int.max




Data Type Inference

The signed integer (Int) is the default for type inference. Any whole number will automatically be inferred as Int. Therefore, we don't have to include a data type annotation when declaring a constant or variable.


For better code interoperability, we should use Int whenever possible, even if we are only using positive numbers.
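
For example:

let inferredNumber = 42
// No annotation needed; inferredNumber is inferred as Int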


Integer Overflow

Readers familiar with Objective-C or C should know that if we add 1 to the maximum limit, the integer variable silently overflows and wraps around. In Swift, integers do not silently overflow: if we add 1 to the maximum value, Swift reports an error instead.


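For example (a minimal sketch; the commented-out line is rejected by Swift):

let maxNumber = Int.max
// let overflowed = maxNumber + 1
// error: arithmetic operation results in an overflow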

Floating Point Number

Floating point numbers are numbers with a fractional part. For example, 3.1415, 0.22 and -5.23 are all floating point numbers. It is advisable to use floating point numbers for division unless we want to discard the remainder.
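
To illustrate the division advice (a minimal sketch):

let intQuotient = 7 / 2      // 3; integer division discards the remainder
let fpQuotient = 7.0 / 2.0   // 3.5; floating point division keeps it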


In Swift programming, floating point numbers are represented by 2 data types: Double and Float. Double is a 64-bit floating point number with a precision of at least 15 decimal digits, whereas Float is a 32-bit floating point number with a precision of as little as 6 decimal digits.


Please also note that floating point numbers are not limited by the CPU's bit size. They are usually calculated by the floating point unit (a.k.a. the co-processor or math processor).


Floating Point Literals

Floating point numbers can be represented by decimal literals, or by hexadecimal literals with the 0x prefix.


Decimal floating point literals must have a decimal point with a number on both sides of it.


For example, the number 132 will be inferred as Int. For the system to infer a floating point number, we must write 132 as 132.0.




In addition, we can include an exponent using E or e. For example, the number 255 can be written as 2.55 × 10² or as 255000 × 10⁻³. In Swift, E or e represents "times 10 to the power of", and the number after E or e is the exponent. E and e mean exactly the same thing.


For example, 2.55 × 10² can be written as 2.55e2; similarly, 255000 × 10⁻³ can be written as 255000e-3.




Examples:


let fpLiteral2 = 3.1415

let fpLiteral3 = 2.55e2    // 255.0

let fpLiteral4 = 2.55E2    // 255.0

let fpLiteral5 = 25.64E4   // 256400.0




More Example:


// Decimal: use e for exponential
// All three constants hold the same value, 2.1718
let val22 = 2.1718
let val23 = 0.021718e2
let val24 = 217.18e-2




Floating point numbers can also be written in hexadecimal form, prefixed with 0x. In hexadecimal form, the exponent is written with p instead of e, and the base of the exponent is 2 rather than 10. The number after p is a decimal exponent applied to base 2.


For example, 255 can be written as 0xFFp0, and the number 0xFFp3 is equivalent to 255 × 2³ = 2040.




Additional example:


// Hex: use p for exponential (base 2)
let val25 = 0xFFp2    // 255 × 4 = 1020.0
let val26 = 0xFFp1    // 255 × 2 = 510.0
let val27 = 0xFFp0    // 255 × 1 = 255.0
let val28 = 0xFFp-1   // 255 ÷ 2 = 127.5
let val29 = 0xFFp-2   // 255 ÷ 4 = 63.75




More Example:


// More examples: all of the following numeric literals refer to the same number.
let val30 = 12.1875
let val31 = 1.21875e1
let val32 = 0xC.3p0   // 0xC.3 = 12 + 3/16 = 12.1875




We can also use underscores to split long digit sequences into more readable sections.




Example:


// We can use _ and padded 0 for readability
let val34 = 000223.000655
let val35 = 000_223.000_655




Float

Float is a 32-bit floating point number with a limited precision of about 6 decimal digits. Because of this limited precision, Float is not recommended unless absolutely necessary.


Using Float

To declare a Float constant or variable, we need to include the data type annotation; type inference infers decimal literals as Double, not Float.


Syntax is as follows:
let <constant_name>:Float = <floating_point_number/expression_that_evaluates_to_floating_point>
Or
var <variable_name>:Float = <floating_point_number/expression_that_evaluates_to_floating_point>


Example:


let shortPi:Float = 3.14159265359

let longPi = 3.14159265359




As mentioned earlier, Float has limited precision. If we enter a number with many decimal digits, it will be rounded to roughly 6 to 7 significant digits.
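
A quick check in a playground (illustrative output):

print(shortPi)   // prints 3.1415927; the extra digits are lost
print(longPi)    // prints 3.14159265359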


Is it more efficient to use Float if the numbers we are computing never exceed 2 decimal places? Not necessarily. Memory is affordable nowadays, so we would not save much space by sticking to Float. CPU execution is a complex process, and using Float does not necessarily speed it up; it may even slow execution down with extra wait cycles.

Double

Double is a 64-bit floating point number with a precision of at least 15 decimal digits. It is the recommended data type for decimal numbers.


Using Double

To declare a constant or variable with an assigned value, we do not need a data type annotation, since the system infers any decimal number as Double.


Syntax is as follows:
let <constant_name> = <floating_point_number/expression_that_evaluates_to_floating_point>
Or
var <variable_name> = <floating_point_number/expression_that_evaluates_to_floating_point>


Example:


let someFPNumber = 2.1718

let pi = 3.1415

let radius = 1.2




*****

String

In Swift, String is the data type that handles text. It is the most frequently used data type. Swift also includes a data type known as Character, which holds a single character. This section briefly discusses the basic features; for a more comprehensive examination, please refer to the chapter "Working with Strings".


String Literals

String literals are basically the raw text that Swift accepts. Swift accepts all kinds of Unicode text, except the backslash (\), the double quotation (") and the triple double quotation ("""), which have special meaning and must be escaped.


Double quotation marks enclose the text, marking its beginning and end. Triple double quotation marks enclose multiline text.


Example:


"This is a text"

"!@#$%^&*()_+=-:';><?,./|{}[]"




Multiline String Literals

In Swift 4, String supports multiline literals. To create a multiline literal, we enclose the entire text in triple double quotation marks. The text must start on a new line after the opening triple double quotation.


Example:


let story = """
Title
We can include multi line text here.

Next paragraph

End
"""

print(story)




Unicode String

Swift can accept any Unicode character in a string.


Example:
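
Any Unicode text is acceptable; for instance (an illustrative string):

let unicodeText = "Grüße, 世界! 🚀"
print(unicodeText)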




If we know the Unicode code point, we can use it with the following syntax:


"\u{<unicode_number>}"


Example:


"\u{2EE5}"
"\u{2F25}\u{2F26}"




Extended Grapheme Cluster

An extended grapheme cluster is a sequence of Unicode scalars that together form a single character. In many languages, characters are formed by combining multiple components or strokes.


Example:


"\u{110B}"

"\u{1175}"

"\u{110B}\u{1175}"

"\u{C774}"




Escape Sequence

Since strings are enclosed in double or triple double quotation marks, we cannot use those marks directly inside a string. To mitigate this problem, Swift provides escape sequences introduced by a backslash (\). As a consequence, the backslash itself also cannot be used directly in a string.


To use these 3 symbols, we escape them with a backslash. To include a double quotation mark in a string we enter \" instead of ", and to include a backslash we enter \\ instead of \. To include a triple double quotation in a multiline string, we can write \""" or \"\"\". We do not need an escape sequence for a single double quotation mark inside multiline text.
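
A short illustration (the strings here are hypothetical):

let quoted = "She said, \"Welcome aboard.\""
let path = "C:\\Program Files\\Swift"

print(quoted)
// Prints: She said, "Welcome aboard."

print(path)
// Prints: C:\Program Files\Swift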

Create a String Constant



If we want a string value to remain unchanged, we should create a string constant.


We can create a string constant by using the following syntax:


let <constant_name> = <string literals/string expression>


Example:


let stringConstant = "This is a string constant"




Create a String Variable

If we want a string value that can change as and when required, we should create a string variable.
A string variable can be created using the following syntax:


var <variable_name> = <string literals/string expression>


Example:


var stringVariable1 = "Default string value"
stringVariable1

stringVariable1 = "New string value"
stringVariable1




Empty String

We can create an empty string by providing just the double quotation marks:


var <variable_name> = ""


Alternatively, we can use the syntax below. Both methods produce the same effect.


var <variable_name> = String()


Example:


var stringVariable3 = ""
var stringVariable4 = String()




To check if a string constant/variable is empty, we use the isEmpty property:


Example:


stringVariable3.isEmpty   // true
stringVariable4.isEmpty   // true


Concatenating Strings

  • We can join strings using the plus operator (+).
  • We can join string literals and/or string variables.


Example:


let stringSample1 = "This is a test." + " And this is the second part of a jointed string"

print(stringSample1)






Compound Assignment in String



  • We can also join strings using the += operator. This operator appends a new string onto the end of the original string.


Example:


var stringSample7 = "Hello, "

stringSample7 += "Welcome to the Star Fleet."

print(stringSample7)





Boolean

Swift also supports the Boolean type, annotated as Bool. Unlike some other languages such as C, where false is represented by 0 and true by any non-zero number, in Swift true and false must be expressed explicitly as the keywords true and false.


Example:


let myFact = true
let myNonFact = false




Hash Value

Although the boolean data type is explicitly defined and expressed as true or false, internally a true value has a hash value of 1 and a false value has a hash value of 0.


We can read the hash value but we cannot amend it. Also, printing a boolean prints true or false, not its hash value; to see the hash value we must access the hashValue property itself.


Example:


let myTruth = true
let myNonTruth = false

myTruth.hashValue

myNonTruth.hashValue

print(myTruth)

print(myNonTruth)




Boolean for C Programmer

Booleans work differently in Swift than in C. In C, we can use 0 to represent false and any non-zero value to represent true. In Swift, an integer cannot be evaluated or interpreted as a boolean value. The following statement works in C but will fail in Swift:
var x = 1
if x {...}


We need to write if x == 1 explicitly to produce a boolean value.


let boolTest = false
// The following is the preferred method to test boolean variable
if boolTest == true
{
   print("boolTest is True")
}
else
{
   print("boolTest is False")
}




We can also use the following format without a comparator, since a boolean is already either true or false:


if boolTest
{
   print("boolTest is True")
}
else
{
   print("boolTest is False")
}




The above example works because boolTest is a boolean variable.


We cannot use the following (remove the comment markers to test):


/*
let wrongWay1 = 1
if wrongWay1
{
   print("wrongWay1 is True")
}
else
{
   print("wrongWay1 is False")
}
*/




The above example will produce an error; however, we can test an integer using a comparator as follows:


let rightWay1 = 1
if rightWay1 == 1
{
   print("rightWay1 is True")
}
else
{
   print("rightWay1 is False")
}



***

