diff --git a/book/content/dedication.asc b/book/content/dedication.asc index 2d833fbf..069d116c 100644 --- a/book/content/dedication.asc +++ b/book/content/dedication.asc @@ -1,4 +1,4 @@ [dedication] == Dedication -_To my wife Nathalie that supported me in my long hours of writing and my baby girl Abigail._ +_To my wife Nathalie who supported me in my long hours of writing and my baby girl Abigail._ diff --git a/book/content/part01/algorithms-analysis.asc b/book/content/part01/algorithms-analysis.asc index 29105859..52f57c03 100644 --- a/book/content/part01/algorithms-analysis.asc +++ b/book/content/part01/algorithms-analysis.asc @@ -5,7 +5,7 @@ endif::[] === Fundamentals of Algorithms Analysis -Probably you are reading this book because you want to write better and faster code. +You are probably reading this book because you want to write better and faster code. How can you do that? Can you time how long it takes to run a program? Of course, you can! [big]#⏱# However, if you run the same program on a smartwatch, cellphone or desktop computer, it will take different times. @@ -15,7 +15,7 @@ image::image3.png[image,width=528,height=137] Wouldn't it be great if we can compare algorithms regardless of the hardware where we run them? That's what *time complexity* is for! But, why stop with the running time? -We could also compare the memory "used" by different algorithms, and we called that *space complexity*. +We could also compare the memory "used" by different algorithms, and we call that *space complexity*. .In this chapter you will learn: - What’s the best way to measure the performance of your code regardless of what hardware you use. @@ -59,16 +59,16 @@ To give you a clearer picture of how different algorithms perform as the input s |============================================================================================= |Input size -> |10 |100 |10k |100k |1M |Finding if a number is odd |< 1 sec. |< 1 sec. |< 1 sec. |< 1 sec. |< 1 sec. 
-|Sorting elements in array with merge sort |< 1 sec. |< 1 sec. |< 1 sec. |few sec. |20 sec. -|Sorting elements in array with Bubble Sort |< 1 sec. |< 1 sec. |2 minutes |3 hours |12 days -|Finding all subsets of a given set |< 1 sec. |40,170 trillion years |> centillion years |∞ |∞ -|Find all permutations of a string |4 sec. |> vigintillion years |> centillion years |∞ |∞ +|Sorting array with merge sort |< 1 sec. |< 1 sec. |< 1 sec. |few sec. |20 sec. +|Sorting array with Selection Sort |< 1 sec. |< 1 sec. |2 minutes |3 hours |12 days +|Finding all subsets |< 1 sec. |40,170 trillion years |> centillion years |∞ |∞ +|Finding string permutations |4 sec. |> vigintillion years |> centillion years |∞ |∞ |============================================================================================= Most algorithms are affected by the size of the input (`n`). Let's say you need to arrange numbers in ascending order. Sorting ten items will naturally take less time than sorting out 2 million. But, how much longer? As the input size grow, some algorithms take proportionally more time, we classify them as <> runtime [or `O(n)`]. Others might take power two longer; we call them <> running time [or `O(n^2^)`]. From another perspective, if you keep the input size the same and run different algorithms implementations, you would notice the difference between an efficient algorithm and a slow one. For example, a good sorting algorithm is <>, and an inefficient algorithm for large inputs is <>. -Organizing 1 million elements with merge sort takes 20 seconds while bubble sort takes 12 days, ouch! +Organizing 1 million elements with merge sort takes 20 seconds while selection sort takes 12 days, ouch! The amazing thing is that both programs are solving the same problem with equal data and hardware; and yet, there's a big difference in time! After completing this book, you are going to _think algorithmically_. 
@@ -135,7 +135,7 @@ There’s a notation called *Big O*, where `O` refers to the *order of the funct TIP: Big O = Big Order of a function. -If you have a program which runtime is: +If you have a program that has a runtime of: _7n^3^ + 3n^2^ + 5_ @@ -144,7 +144,7 @@ You can express it in Big O notation as _O(n^3^)_. The other terms (_3n^2^ + 5_) Big O notation, only cares about the “biggest” terms in the time/space complexity. So, it combines what we learn about time and space complexity, asymptotic analysis and adds a worst-case scenario. .All algorithms have three scenarios: -* Best-case scenario: the most favorable input arrange where the program will take the least amount of operations to complete. E.g., array already sorted is beneficial for some sorting algorithms. +* Best-case scenario: the most favorable input arrangement where the program will take the least amount of operations to complete. E.g., an array that's already sorted is beneficial for some sorting algorithms. * Average-case scenario: this is the most common case. E.g., array items in random order for a sorting algorithm. * Worst-case scenario: the inputs are arranged in such a way that causes the program to take the longest to complete. E.g., array items in reversed order for some sorting algorithm will take the longest to run. @@ -154,7 +154,7 @@ TIP: Big O only cares about the highest order of the run time function and the w WARNING: Don't drop terms that are multiplying other terms. _O(n log n)_ is not equivalent to _O(n)_. However, _O(n + log n)_ is. -There are many common notations like polynomial, _O(n^2^)_ like we saw in the `getMin` example; constant _O(1)_ and many more that we are going to explore in the next chapter. +There are many common notations like polynomial, _O(n^2^)_ as we saw in the `getMin` example; constant _O(1)_ and many more that we are going to explore in the next chapter. 
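To see why the lower-order terms get dropped, here is a quick sketch. The `operations` and `cubicShare` functions are made-up names for illustration; `operations` evaluates the runtime _7n^3^ + 3n^2^ + 5_ from the text, and `cubicShare` shows how thoroughly the cubic term dominates as `n` grows:

```javascript
// Hypothetical operation count from the text: 7n^3 + 3n^2 + 5.
function operations(n) {
  return 7 * n ** 3 + 3 * n ** 2 + 5;
}

// Fraction of the total contributed by the cubic term alone.
function cubicShare(n) {
  return (7 * n ** 3) / operations(n);
}

cubicShare(10);   // ~0.96: the cubic term already dominates
cubicShare(1000); // ~0.9996: the other terms barely register
```

For large inputs, the answer is essentially determined by _7n^3^_ alone, which is why Big O keeps only _O(n^3^)_.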
Again, time complexity is not a direct measure of how long a program takes to execute, but rather how many operations it performs given the input size. Nevertheless, there’s a relationship between time complexity and clock time as we can see in the following table. (((Tables, Intro, Input size vs clock time by Big O))) diff --git a/book/content/part01/big-o-examples.asc b/book/content/part01/big-o-examples.asc index 55042627..3b56e1a0 100644 --- a/book/content/part01/big-o-examples.asc +++ b/book/content/part01/big-o-examples.asc @@ -7,7 +7,7 @@ endif::[] There are many kinds of algorithms. Most of them fall into one of the eight time complexities that we are going to explore in this chapter. -.Eight Running Time complexity You Should Know +.Eight Running Time Complexities You Should Know - Constant time: _O(1)_ - Logarithmic time: _O(log n)_ - Linear time: _O(n)_ @@ -17,7 +17,7 @@ There are many kinds of algorithms. Most of them fall into one of the eight time - Exponential time: _O(2^n^)_ - Factorial time: _O(n!)_ -We a going to provide examples for each one of them. +We are going to provide examples for each one of them. Before we dive in, here’s a plot with all of them. @@ -30,7 +30,7 @@ The above chart shows how the running time of an algorithm is related to the amo ==== Constant (((Constant))) (((Runtime, Constant))) -Represented as *O(1)*, it means that regardless of the input size the number of operations executed is always the same. Let’s see an example. +Represented as *O(1)*, it means that regardless of the input size, the number of operations executed is always the same. Let’s see an example: [#constant-example] ===== Finding if an array is empty @@ -47,7 +47,7 @@ include::{codedir}/runtimes/01-is-empty.js[tag=isEmpty] Another more real life example is adding an element to the begining of a <>. You can check out the implementation <>. 
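As a rough sketch of what such a constant-time check can look like (illustrative only, not the book's exact `01-is-empty.js` file):

```javascript
// Constant time: we only read the length property, never scan the elements.
function isEmpty(array) {
  return array.length === 0;
}

isEmpty([]);        // true
isEmpty([1, 2, 3]); // false
```

Whether the array has three elements or three million, only one comparison runs, so the runtime is *O(1)*.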
-As you can see, in both examples (array and linked list) if the input is a collection of 10 elements or 10M it would take the same amount of time to execute. You can't get any more performant than this! +As you can see in both examples (array and linked list), if the input is a collection of 10 elements or 10M, it would take the same amount of time to execute. You can't get any more performant than this! [[logarithmic]] ==== Logarithmic @@ -68,7 +68,7 @@ The binary search only works for sorted lists. It starts searching for an elemen include::{codedir}/runtimes/02-binary-search.js[tag=binarySearchRecursive] ---- -This binary search implementation is a recursive algorithm, which means that the function `binarySearch` calls itself multiple times until the solution is found. The binary search splits the array in half every time. +This binary search implementation is a recursive algorithm, which means that the function `binarySearchRecursive` calls itself multiple times until the solution is found. The binary search splits the array in half every time. Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call you are most likely in front of a logarithmic runtime: _O(log n)_. @@ -92,8 +92,8 @@ include::{codedir}/runtimes/03-has-duplicates.js[tag=hasDuplicates] .`hasDuplicates` has multiple scenarios: * *Best-case scenario*: first two elements are duplicates. It only has to visit two elements. -* *Worst-case scenario*: no duplicated or duplicated are the last two. In either case, it has to visit every item on the array. -* *Average-case scenario*: duplicates are somewhere in the middle of the collection. 
Only, half of the array will be visited. +* *Worst-case scenario*: no duplicates or duplicates are the last two. In either case, it has to visit every item in the array. +* *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only half of the array will be visited. As we learned before, the big O cares about the worst-case scenario, where we would have to visit every element on the array. So, we have an *O(n)* runtime. @@ -147,11 +147,11 @@ Usually they have double-nested loops, where each one visits all or most element [[quadratic-example]] ===== Finding duplicates in an array (naïve approach) -If you remember we have solved this problem more efficiently on the <> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_: +If you remember, we have solved this problem more efficiently in the <> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_: // image:image12.png[image,width=527,height=389] -.Naïve implementation of has duplicates function +.Naïve implementation of hasDuplicates function [source, javascript] ---- include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates] @@ -159,7 +159,7 @@ include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates] As you can see, we have two nested loops causing the running time to be quadratic. How much difference is there between a linear vs. quadratic algorithm? -Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <> you will get the answer in seconds! [big]#🚀# +Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. 
If you use this quadratic solution, you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <>, you will get the answer in seconds! [big]#🚀# [[cubic]] ==== Cubic @@ -186,7 +186,7 @@ include::{codedir}/runtimes/06-multi-variable-equation-solver.js[tag=findXYZ] WARNING: This is just an example, there are better ways to solve multi-variable equations. -As you can see three nested loops usually translates to O(n^3^). If you have a four variable equation and four nested loops it would be O(n^4^) and so on when we have a runtime in the form of _O(n^c^)_, where _c > 1_, we refer to this as a *polynomial runtime*. +As you can see, three nested loops usually translate to O(n^3^). If you have a four-variable equation and four nested loops, it would be O(n^4^), and so on. When we have a runtime in the form of _O(n^c^)_, where _c > 1_, we refer to this as a *polynomial runtime*. [[exponential]] ==== Exponential diff --git a/book/content/part02/array-vs-list-vs-queue-vs-stack.asc b/book/content/part02/array-vs-list-vs-queue-vs-stack.asc index 6d7439e7..bc289ed8 100644 --- a/book/content/part02/array-vs-list-vs-queue-vs-stack.asc +++ b/book/content/part02/array-vs-list-vs-queue-vs-stack.asc @@ -17,7 +17,7 @@ In this part of the book, we explored the most used linear data structures such * You want constant time to remove/add from extremes of the list. .Use a Queue when: -* You need to access your data in a first-come, first served basis (FIFO). +* You need to access your data on a first-come, first-served basis (FIFO). * You need to implement a <> .Use a Stack when: diff --git a/book/content/part02/array.asc b/book/content/part02/array.asc index c2ef97aa..6c95d376 100644 --- a/book/content/part02/array.asc +++ b/book/content/part02/array.asc @@ -17,7 +17,7 @@ TIP: Strings are a collection of Unicode characters and most of the array concep .Fixed vs. Dynamic Size Arrays **** -Some programming languages have fixed size arrays like Java and C++. 
Fixed size arrays might be a hassle when your collection gets full, and you have to create a new one with a bigger size. For that, those programming languages also have built-in dynamic arrays: we have `vector` in C++ and `ArrayList` in Java. Dynamic programming languages like JavaScript, Ruby, Python use dynamic arrays by default. +Some programming languages have fixed size arrays like Java and C++. Fixed size arrays might be a hassle when your collection gets full, and you have to create a new one with a bigger size. For that, those programming languages also have built-in dynamic arrays: we have `vector` in C++ and `ArrayList` in Java. Dynamic programming languages like JavaScript, Ruby, and Python use dynamic arrays by default. **** Arrays look like this: @@ -29,7 +29,7 @@ Arrays are a sequential collection of elements that can be accessed randomly usi ==== Insertion -Arrays are built-in into most languages. Inserting an element is simple; you can either add them on creation time or after initialization. Below you can find an example for both cases: +Arrays are built into most languages. Inserting an element is simple; you can either add them at creation time or after initialization. Below you can find an example for both cases: .Inserting elements into an array [source, javascript] ---- @@ -44,7 +44,7 @@ array2[100] = 2; array2 // [empty × 3, 1, empty × 96, 2] ---- -Using the index, you can replace whatever value you want. Also, you don't have to add items next to each other. The size of the array will dynamically expand to accommodate the data. You can reference values in whatever index you like index 3 or even 100! In the `array2` we inserted 2 numbers, but the length is 101, and there are 99 empty spaces. +Using the index, you can replace whatever value you want. Also, you don't have to add items next to each other. The size of the array will dynamically expand to accommodate the data. You can reference values at whatever index you like: index 3 or even 100! 
In `array2`, we inserted 2 numbers but the length is 101 and there are 99 empty spaces. [source, javascript] ---- @@ -87,7 +87,7 @@ const array = [2, 5, 1, 9, 6, 7]; array.splice(1, 0, 111); // ↪️ [] <1> // array: [2, 111, 5, 1, 9, 6, 7] ---- -<1> at the position `1`, delete `0` elements and insert `111`. +<1> at position `1`, delete `0` elements and insert `111`. The Big O for this operation would be *O(n)* since in worst case it would move most of the elements to the right. @@ -132,7 +132,7 @@ const array = [2, 5, 1, 9, 6, 7]; array[4]; // ↪️ 6 ---- -Searching by index takes constant time, *O(1)*, to retrieve values out of the array. If we want to get fancier we can create a function: +Searching by index takes constant time - *O(1)* - to retrieve values out of the array. If we want to get fancier, we can create a function: // image:image17.png[image,width=528,height=293] @@ -184,7 +184,7 @@ We would have to loop through the whole array (worst case) or until we find it: ==== Deletion -Deleting (similar to insertion) there are three possible scenarios, removing at the beginning, middle or end. +There are three possible scenarios for deletion (similar to insertion): removing at the beginning, middle or end. ===== Deleting element from the beginning @@ -223,7 +223,7 @@ array.splice(2, 1); // ↪️[2] <1> ---- <1> delete 1 element at position 2 -Deleting from the middle might cause most the elements of the array to move back one position to fill in for the eliminated item. Thus, runtime: O(n). +Deleting from the middle might cause most of the elements of the array to move up one position to fill in for the eliminated item. Thus, runtime: O(n). ===== Deleting element from the end @@ -237,7 +237,7 @@ array.pop(); // ↪️111 // array: [2, 5, 1, 9] ---- -No element other element has been shifted, so it’s an _O(1)_ runtime. +No other element has been shifted, so it’s an _O(1)_ runtime. 
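To make the contrast with deleting from the beginning concrete, here's a quick sketch using JavaScript's built-ins: `pop` touches only the last slot, while `shift` has to re-index every remaining element:

```javascript
const array = [2, 5, 1, 9, 111];

// O(1): nothing else moves.
const last = array.pop();    // 111; array is now [2, 5, 1, 9]

// O(n): every remaining item shifts down one index.
const first = array.shift(); // 2; array is now [5, 1, 9]
```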
.JavaScript built-in `array.pop` **** @@ -264,7 +264,7 @@ To sum up, the time complexity of an array is: (((Runtime, Constant))) (((Tables, Linear DS, JavaScript Array buit-in operations Complexities))) -.Array Operations timex complexity +.Array Operations time complexity |=== | Operation | Time Complexity | Usage | push ^| O(1) | Insert element to the right side. diff --git a/book/content/part02/linked-list.asc b/book/content/part02/linked-list.asc index d05ed265..263caef3 100644 --- a/book/content/part02/linked-list.asc +++ b/book/content/part02/linked-list.asc @@ -12,18 +12,18 @@ A list (or Linked List) is a linear data structure where each node is "linked" t .Linked Lists can be: - Singly: every item has a pointer to the next node -- Doubly: every node has a reference to the next and previous object +- Doubly: every node has a reference to the next and previous node - Circular: the last element points to the first one. [[singly-linked-list]] ==== Singly Linked List -Each element or node is *connected* to the next one by a reference. When a node only has one connection it's called *singly linked list*: +Each element or node is *connected* to the next one by a reference. When a node only has one connection, it's called a *singly linked list*: .Singly Linked List Representation: each node has a reference (blue arrow) to the next one. image::image19.png[image,width=498,height=97] -Usually, a Linked List is referenced by the first element in called *head* (or *root* node). For instance, if you want to get the `cat` element from the example above, then the only way to get there is using the `next` field on the head node. You would get `art` first, then use the next field recursively until you eventually get the `cat` element. +Usually, a Linked List is referenced by the first element called *head* (or *root* node). For instance, if you want to get the `cat` element from the example above, then the only way to get there is using the `next` field on the head node. 
You would get `art` first, then use the next field recursively until you eventually get the `cat` element. [[doubly-linked-list]] ==== Doubly Linked List @@ -33,7 +33,7 @@ When each node has a connection to the `next` item and also the `previous` one, .Doubly Linked List: each node has a reference to the next and previous element. image::image20.png[image,width=528,height=74] -With a doubly list you can not only move forward but also backward. If you keep the reference to the last element (`cat`) you can step back and reach the middle part. +With a doubly list, you can not only move forward but also backward. If you keep the reference to the last element (`cat`) you can step back and reach the middle part. If we implement the code for the `Node` elements, it would be something like this: @@ -47,13 +47,13 @@ include::{codedir}/data-structures/linked-lists/node.js[tag=snippet] ==== Linked List vs. Array -Arrays allow you to access data anywhere in the collection using an index. However, Linked List visits nodes in sequential order. In the worst case scenario, it takes _O(n)_ to get an element from a Linked List. You might be wondering: Isn’t always an array more efficient with _O(1)_ access time? It depends. +Arrays allow you to access data anywhere in the collection using an index. However, Linked List visits nodes in sequential order. In the worst case scenario, it takes _O(n)_ to get an element from a Linked List. You might be wondering: Isn’t an array always more efficient with _O(1)_ access time? It depends. -We also have to understand the space complexity to see the trade-offs between arrays and linked lists. An array pre-allocates contiguous blocks of memory. When it is getting full, it has to create a bigger array (usually 2x) and copy all the elements. It takes _O(n)_ to copy all the items over. On the other hand, LinkedList’s nodes only reserve precisely the amount of memory it needs. 
They don’t have to be next to each other, nor large chunks of memory have to be booked beforehand like arrays. Linked List is more on a "grow as you go" basis. +We also have to understand the space complexity to see the trade-offs between arrays and linked lists. An array pre-allocates contiguous blocks of memory. When it is getting full, it has to create a bigger array (usually 2x) and copy all the elements. It takes _O(n)_ to copy all the items over. On the other hand, LinkedList’s nodes only reserve precisely the amount of memory they need. They don’t have to be next to each other, nor do large chunks of memory have to be reserved beforehand as with arrays. A Linked List is more of a "grow as you go" structure. Another difference is that adding/deleting at the beginning on an array takes O(n); however, the linked list is a constant operation O(1) as we will implement later. -A drawback of a linked list is that if you want to insert/delete an element at the end of the list, you would have to navigate the whole collection to find the last one O(n). However, this can be solved by keeping track of the last element in the list. We are going to implement that! +A drawback of a linked list is that if you want to insert/delete an element at the end of the list, you would have to navigate the whole collection to find the last one: O(n). However, this can be solved by keeping track of the last element in the list. We are going to implement that! ==== Implementing a Linked List @@ -74,7 +74,7 @@ In our constructor, we keep a reference of the `first` and also `last` node for ==== Searching by value -Finding an element by value there’s no other way than iterating through the whole list. +There’s no other way to find an element by value than iterating through the entire list. 
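A minimal sketch of such a linear search, assuming nodes shaped like the chapter's `Node` class (a `value` and a `next` reference); the `indexOf` helper name is ours, for illustration only:

```javascript
// Singly linked node sketch: a value and a reference to the next node.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

// Walks the list from the head; O(n) in the worst case.
function indexOf(head, value) {
  let index = 0;
  for (let node = head; node; node = node.next) {
    if (node.value === value) return index;
    index += 1;
  }
  return -1; // not found
}

// art -> dog -> cat
const head = new Node('art');
head.next = new Node('dog');
head.next.next = new Node('cat');

indexOf(head, 'cat'); // 2
indexOf(head, 'fox'); // -1
```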
.Linked List's searching by values [source, javascript] ---- @@ -109,7 +109,7 @@ Searching by index is very similar, we iterate through the list until we find th include::{codedir}/data-structures/linked-lists/linked-list.js[tag=searchByIndex, indent=0] ---- -If there’s no match, we return `undefined` then. The runtime is _O(n)_. As you might notice the search by index and by position methods looks pretty similar. If you want to take a look at the whole implementation https://github.com/amejiarosario/dsa.js/blob/7694c20d13f6c53457ee24fbdfd3c0ac57139ff4/src/data-structures/linked-lists/linked-list.js#L8[click here]. +If there’s no match, we return `undefined`. The runtime is _O(n)_. As you might notice, the search by index and by position methods look pretty similar. If you want to take a look at the whole implementation, https://github.com/amejiarosario/dsa.js/blob/7694c20d13f6c53457ee24fbdfd3c0ac57139ff4/src/data-structures/linked-lists/linked-list.js#L8[click here]. ==== Insertion @@ -162,7 +162,7 @@ For inserting an element at the middle of the list, you would need to specify th . New node's next `previous`. -Let’s do an example, with the following doubly linked list: +Let’s do an example with the following doubly linked list: ---- art <-> dog <-> cat ---- @@ -181,14 +181,14 @@ Take a look into the implementation of https://github.com/amejiarosario/dsa.js/b include::{codedir}/data-structures/linked-lists/linked-list.js[tag=addMiddle, indent=0] ---- <1> If the new item goes to position 0, then we reuse the `addFirst` method, and we are done! -<2> However, If we are adding to the last position, then we reuse the `addLast` method, and done! +<2> However, if we are adding to the last position, then we reuse the `addLast` method, and done! <3> Adding `newNode` to the middle: First, create the `new` node only if the position exists. 
Take a look at <> to see `get` implementation. <4> Set newNode `previous` reference. <5> Set newNode `next` link. <6> No other node in the list is pointing to `newNode`, so we have to make the prior element point to `newNode`. <7> Make the next element point to `newNode`. -Take notice that we reused, `addFirst` and `addLast` methods. For all the other cases the insertion is in the middle. We use `current.previous.next` and `current.next` to update the surrounding elements and make them point to the new node. Inserting on the middle takes *O(n)* because we have to iterate through the list using the `get` method. +Take notice that we reused `addFirst` and `addLast` methods. For all the other cases, the insertion is in the middle. We use `current.previous.next` and `current.next` to update the surrounding elements and make them point to the new node. Inserting in the middle takes *O(n)* because we have to iterate through the list using the `get` method. ==== Deletion @@ -201,7 +201,7 @@ Deleting the first element (or head) is a matter of removing all references to i .Deleting an element from the head of the list image::image26.png[image,width=528,height=74] -For instance, to remove the head (“art”) node, we change the variable `first` to point to the second node “dog”. We also remove the variable `previous` from the "dog" node, so it doesn't point to the “art” node. The garbage collector will get rid of the “art” node when it seems nothing is using it anymore. +For instance, to remove the head (“art”) node, we change the variable `first` to point to the second node “dog”. We also remove the variable `previous` from the "dog" node, so it doesn't point to the “art” node. The garbage collector will get rid of the “art” node when it sees nothing is using it anymore. 
.Linked List's remove from the beginning of the list [source, javascript] ---- @@ -209,17 +209,17 @@ For instance, to remove the head (“art”) node, we change the variable `first include::{codedir}/data-structures/linked-lists/linked-list.js[tag=removeFirst, indent=0] ---- -As you can see, when we want to remove the first node we make the 2nd element the first one. +As you can see, when we want to remove the first node, we make the 2nd element the first one. ===== Deleting element from the tail -Removing the last element from the list would require to iterate from the head until we find the last one, that’s O(n). But, If we have a reference to the last element, which we do, We can do it in _O(1)_ instead! +Removing the last element from the list would require iterating from the head until we find the last one, that’s O(n). But, if we have a reference to the last element, which we do, we can do it in _O(1)_ instead! .Removing last element from the list using the last reference. image::image27.png[image,width=528,height=221] -For instance, if we want to remove the last node “cat”. We use the last pointer to avoid iterating through the whole list. We check `last.previous` to get the “dog” node and make it the new `last` and remove its next reference to “cat”. Since nothing is pointing to “cat” then is out of the list and eventually is deleted from memory by the garbage collector. +For instance, if we want to remove the last node “cat”, we use the last pointer to avoid iterating through the whole list. We check `last.previous` to get the “dog” node and make it the new `last` and remove its next reference to “cat”. Since nothing is pointing to “cat”, it is out of the list and eventually is deleted from memory by the garbage collector. 
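A rough sketch of that constant-time removal, assuming doubly linked nodes with `value`/`previous`/`next` and a list object holding `first`/`last` references as described (the `removeLast` helper is ours, not the book's exact implementation):

```javascript
// Doubly linked node sketch: value plus previous and next references.
class Node {
  constructor(value) {
    this.value = value;
    this.previous = null;
    this.next = null;
  }
}

// O(1): uses the `last` reference instead of walking the list.
function removeLast(list) {
  const removed = list.last;
  if (!removed) return undefined;       // empty list
  list.last = removed.previous;         // "dog" becomes the new last
  if (list.last) list.last.next = null; // drop the reference to "cat"
  else list.first = null;               // the list became empty
  return removed.value;
}

// art <-> dog <-> cat
const art = new Node('art');
const dog = new Node('dog');
const cat = new Node('cat');
art.next = dog; dog.previous = art;
dog.next = cat; cat.previous = dog;
const list = { first: art, last: cat };

removeLast(list); // 'cat'; list.last is now the "dog" node
```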
.Linked List's remove from the end of the list [source, javascript] @@ -238,7 +238,7 @@ To remove a node from the middle, we make the surrounding nodes to bypass the on image::image28.png[image,width=528,height=259] -In the illustration, we are removing the middle node “dog” by making art’s `next` variable to point to cat and cat’s `previous` to be “art” totally bypassing “dog”. +In the illustration, we are removing the middle node “dog” by making art’s `next` variable to point to cat and cat’s `previous` to be “art”, totally bypassing “dog”. Let’s implement it: @@ -261,14 +261,14 @@ So far, we have seen two liner data structures with different use cases. Here’ .2+.^s| Data Structure 2+^s| Searching By 3+^s| Inserting at the 3+^s| Deleting from .2+.^s| Space ^|_Index/Key_ ^|_Value_ ^|_beginning_ ^|_middle_ ^|_end_ ^|_beginning_ ^|_middle_ ^|_end_ | Array ^|O(1) ^|O(n) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n) -| Linked List (singly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(n)* ^|O(n) +| Linked List (singly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|*O(n)* ^|O(n) | Linked List (doubly) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(1)* ^|O(n) |=== // end::table[] (((Linear))) (((Runtime, Linear))) -If you compare the singly linked list vs. doubly linked list, you will notice that the main difference is deleting elements from the end. For a singly list is *O(n)*, while for a doubly list is *O(1)*. +If you compare the singly linked list vs. doubly linked list, you will notice that the main difference is inserting elements to and deleting elements from the end. For a singly linked list, it's *O(n)*, while a doubly linked list is *O(1)*. Comparing an array with a doubly linked list, both have different use cases: @@ -284,4 +284,4 @@ Use a doubly linked list when: * You want to insert elements at the start and end of the list. The linked list has O(1) while array has O(n). 
* You want to save some memory when dealing with possibly large data sets. Arrays pre-allocate a large chunk of contiguous memory on initialization. Lists are more “grow as you go”. -For the next two linear data structures <> and <>, we are going to use a doubly linked list to implement them. We could use an array as well, but since inserting/deleting from the start perform better on linked-list, we are going use that. +For the next two linear data structures <> and <>, we are going to use a doubly linked list to implement them. We could use an array as well, but since inserting/deleting from the start performs better with linked-lists, we are going to use that. diff --git a/book/content/part02/queue.asc b/book/content/part02/queue.asc index 62d50ff2..aab404a7 100644 --- a/book/content/part02/queue.asc +++ b/book/content/part02/queue.asc @@ -24,7 +24,7 @@ We could use an array or a linked list to implement a Queue. However, it is reco [source, javascript] ---- include::{codedir}/data-structures/queues/queue.js[tag=constructor] - // ... methods goes here ... + // ... methods go here ... } ---- @@ -32,7 +32,7 @@ We initialize the Queue creating a linked list. Now, let’s add the `enqueue` a ==== Insertion (((Enqueue))) -For inserting elements on queue, also know as *enqueue*, we add items to the back of the list using `addLast`: +For inserting elements into a queue, also known as *enqueue*, we add items to the back of the list using `addLast`: .Queue's enqueue [source, javascript] ---- @@ -44,7 +44,7 @@ As discussed, this operation has a constant runtime. 
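To see the FIFO behavior end to end, here is a stripped-down sketch. Note the chapter's Queue delegates to the doubly linked list so both ends are O(1); the array-backed storage below (`Array#shift` is O(n)) and the `SimpleQueue` name are ours, used only to keep the example short:

```javascript
// Stripped-down FIFO sketch (illustrative; not the book's Queue class).
class SimpleQueue {
  constructor() {
    this.items = [];
  }

  enqueue(item) { // add to the back
    this.items.push(item);
    return this;
  }

  dequeue() { // remove from the front (first-in, first-out)
    return this.items.shift();
  }
}

const queue = new SimpleQueue();
queue.enqueue('a').enqueue('b').enqueue('c');
queue.dequeue(); // 'a': the first item in is the first one out
```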
==== Deletion (((Dequeue))) -For removing elements from a queue, also know as *dequeue*, we remove elements from the front of the list using `removeFirst`: +For removing elements from a queue, also known as *dequeue*, we remove elements from the front of the list using `removeFirst`: .Queue's dequeue [source, javascript] @@ -64,7 +64,7 @@ We can use our Queue class like follows: include::{codedir}/data-structures/queues/queue.js[tag=snippet, indent=0] ---- -You can see that the items are dequeue in the same order they were added, FIFO (first-in, first out). +You can see that the items are dequeued in the same order they were added, FIFO (first-in, first out). ==== Queue Complexity diff --git a/book/content/part02/stack.asc b/book/content/part02/stack.asc index 09b8a741..81ced6f2 100644 --- a/book/content/part02/stack.asc +++ b/book/content/part02/stack.asc @@ -11,16 +11,16 @@ endif::[] (((LIFO))) The stack is a data structure that restricts the way you add and remove data. It only allows you to insert and retrieve in a *Last-In-First-Out* (LIFO) fashion. -An analogy is to think the stack is a rod and the data are discs. You can only take out the last one you put in. +An analogy is to think that the stack is a rod and the data are discs. You can only take out the last one you put in. .Stack data structure is like a stack of disks: the last element in is the first element out image::image29.png[image,width=240,height=238] // #Change image from https://www.khanacademy.org/computing/computer-science/algorithms/towers-of-hanoi/a/towers-of-hanoi[Khan Academy]# -As you can see in the image above, If you insert the disks in the order `5`, `4`, `3`, `2`, `1`. Then you can remove them on `1`, `2`, `3`, `4`, `5`. +As you can see in the image above, If you insert the disks in the order `5`, `4`, `3`, `2`, `1`, then you can remove them in `1`, `2`, `3`, `4`, `5`. 
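The disk ordering above can be demonstrated with a short sketch that uses a plain JavaScript array as an ad-hoc stack (`push` and `pop` both operate on the end of the array, so both are LIFO-friendly):

```javascript
// Using a plain array as a stack: push/pop both act on the end (LIFO).
const stack = [];
[5, 4, 3, 2, 1].forEach((disk) => stack.push(disk)); // insert 5, 4, 3, 2, 1

const removed = [];
while (stack.length > 0) removed.push(stack.pop()); // take the last one put in, first

console.log(removed); // [1, 2, 3, 4, 5]
```

The last disk inserted (`1`) is the first one out, which is the LIFO behavior the stack enforces.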
-The stack inserts items to the end of the collection and also removes from the end. Both, an array and linked list would do it in constant time. However, since we don’t need the Array’s random access, a linked list makes more sense. +The stack inserts items to the end of the collection and also removes from the end. Both an array and a linked list would do it in constant time. However, since we don’t need the Array’s random access, a linked list makes more sense. .Stack's constructor [source, javascript] @@ -84,4 +84,4 @@ Implementing the stack with an array and linked list would lead to the same time |=== // end::table[] -It's not very common to search for values on a stack (other Data Structures are better suited for this). Stacks especially useful for implementing <>. +It's not very common to search for values on a stack (other Data Structures are better suited for this). Stacks are especially useful for implementing <>. diff --git a/book/content/preface.asc b/book/content/preface.asc index 7e03dcd6..bb780633 100644 --- a/book/content/preface.asc +++ b/book/content/preface.asc @@ -3,15 +3,15 @@ === What is in this book? -_{doctitle}_ is a book that can be read from cover to cover, where each section builds on top of the previous one. Also, it can be used as a reference manual where developers can refresh specific topics before an interview or looking for ideas to solve a problem optimally. (Check out the <> and <>) +_{doctitle}_ is a book that can be read from cover to cover, where each section builds on top of the previous one. Also, it can be used as a reference manual where developers can refresh specific topics before an interview or look for ideas to solve a problem optimally. (Check out the <> and <>) -This publication is designed to be concise, intending to serve software developers looking to get a firm conceptual understanding of data structures in a quick yet in-depth fashion.
After reading this book, the reader should have a fundamental knowledge of algorithms, including when and where to apply it, what are the trade-offs of using one data structure over the other. The reader will then be able to make intelligent decisions about algorithms and data structures in their projects require. +This publication is designed to be concise, intending to serve software developers looking to get a firm conceptual understanding of data structures in a quick yet in-depth fashion. After reading this book, the reader should have a fundamental knowledge of algorithms, including when and where to apply them, and the trade-offs of using one data structure over another. The reader will then be able to make intelligent decisions about algorithms and data structures in their projects. === Who this book is for This book is for software developers familiar with JavaScript looking to improve their problem-solving skills or preparing for a job interview. -NOTE: You can apply the concepts in this book to any programming language. However, instead of doing examples in pseudo-code we are going to use JavaScript to implement the code examples. +NOTE: You can apply the concepts in this book to any programming language. However, instead of doing examples in pseudo-code, we are going to use JavaScript to implement the code examples. === What you need for this book diff --git a/book/part02-linear-data-structures.asc b/book/part02-linear-data-structures.asc index ad0db79e..ca76e78a 100644 --- a/book/part02-linear-data-structures.asc +++ b/book/part02-linear-data-structures.asc @@ -3,7 +3,7 @@ Data Structures comes in many flavors. There’s no one to rule them all. You have to know the tradeoffs so you can choose the right one for the job. -Even though in your day-to-day, you might not need to re-implementing them, knowing how they work internally would help you how when to use over the other or even tweak them to create a new one.
We are going to explore the most common data structures time and space complexity. +Even though in your day-to-day work you might not need to re-implement them, knowing how they work internally would help you know when to use one over the other or even tweak them to create a new one. We are going to explore the most common data structures' time and space complexity. .In this part we are going to learn about the following linear data structures: - <> diff --git a/package.json b/package.json index c1fc41e0..2297df68 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "dsa.js", - "version": "1.3.9", + "version": "1.3.10", "description": "Data Structures & Algorithms in JS", "author": "Adrian Mejia (https://adrianmejia.com)", "homepage": "https://github.com/amejiarosario/dsa.js", diff --git a/src/data-structures/queues/queue.js b/src/data-structures/queues/queue.js index 2ef458a8..9b4e4bd9 100644 --- a/src/data-structures/queues/queue.js +++ b/src/data-structures/queues/queue.js @@ -2,7 +2,7 @@ const LinkedList = require('../linked-lists/linked-list'); // tag::constructor[] /** - * Data structure where add and remove elements in a first-in, first-out (FIFO) + * Data structure where we add and remove elements in a first-in, first-out (FIFO) fashion */ class Queue { constructor() {
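One concrete payoff of knowing the internals, per the singly-vs-doubly comparison table earlier: deleting from the end is O(n) on a singly linked list but O(1) on a doubly linked list. A minimal sketch with bare node objects (hypothetical helpers, not the book's `LinkedList` class) makes the difference visible:

```javascript
// Singly linked list: to drop the last node we must walk from the head
// to find the node *before* it, because nodes have no `previous` pointer.
function removeLastSingly(head) {
  if (!head) return null;
  if (!head.next) return null; // single node: list becomes empty
  let current = head;
  while (current.next.next) current = current.next; // O(n) traversal
  current.next = null;
  return head;
}

// Doubly linked list: the tail already knows its predecessor, so the
// removal is a couple of pointer updates — O(1).
function removeLastDoubly(list) {
  if (!list.tail) return list;
  list.tail = list.tail.previous;
  if (list.tail) list.tail.next = null;
  else list.head = null;
  return list;
}

// singly: a -> b -> c
const a = { value: 'a', next: null };
const b = { value: 'b', next: null };
const c = { value: 'c', next: null };
a.next = b;
b.next = c;
removeLastSingly(a); // list is now a -> b

// doubly: x <-> y
const x = { value: 'x', previous: null, next: null };
const y = { value: 'y', previous: x, next: null };
x.next = y;
const list = removeLastDoubly({ head: x, tail: y }); // list is now just x
```

The extra `previous` pointer costs a little memory per node but buys the O(1) delete-from-the-end that the table advertises for the doubly linked list.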