Binary Search Tree – Insertion, Introduction

Binary search trees (BSTs) are very simple to understand. We start with a root node with value x, where the left subtree of x contains nodes with values < x
and the right subtree contains nodes whose values are ≥ x. Each node follows the same rules with respect to nodes in their left and right subtrees. BSTs are of interest because they have operations which are favourably fast: insertion, look up, and deletion can all be done in O(log n) time. It is important to note that the O(log n) times for these operations can only be attained if the BST is reasonably balanced; for a tree data structure with self-balancing properties, see the AVL tree (defined in §7).

In the following examples you can assume, unless it is used as a parameter alias, that root is a reference to the root node of the tree.

Insertion

As mentioned previously, insertion is an O(log n) operation provided that the tree is moderately balanced.
1) algorithm Insert(value)
2)   Pre: value has passed custom type checks for type T
3)   Post: value has been placed in the correct location in the tree
4)   if root = ∅
5)     root ← node(value)
6)   else
7)     InsertNode(root, value)
8)   end if
9) end Insert

1) algorithm InsertNode(current, value)
2)   Pre: current is the node to start from
3)   Post: value has been placed in the correct location in the tree
4)   if value < current.Value
5)     if current.Left = ∅
6)       current.Left ← node(value)
7)     else
8)       InsertNode(current.Left, value)
9)     end if
10)  else
11)    if current.Right = ∅
12)      current.Right ← node(value)
13)    else
14)      InsertNode(current.Right, value)
15)    end if
16)  end if
17) end InsertNode

The insertion algorithm is split for a good reason. The first algorithm (non-recursive) checks a core base case: whether or not the tree is empty. If
the tree is empty then we simply create our root node and finish. In all other cases we invoke the recursive InsertNode algorithm, which simply guides us to
the first appropriate place in the tree to put value. Note that at each stage we perform a binary chop: we either choose to recurse into the left subtree or the
right by comparing the new value with that of the current node. For any totally ordered type, no value can simultaneously satisfy the conditions to place it in
both subtrees.
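
For concreteness, here is a minimal C# sketch of the two-part insertion above. The Node and BinarySearchTree names are illustrative, and int stands in for the generic type T; a production version would also enforce the pre-conditions.

// Minimal sketch of BST insertion, mirroring Insert/InsertNode above.
// Node and BinarySearchTree are illustrative names; int stands in for T.
public class Node
{
    public int Value;
    public Node Left, Right;
    public Node(int value) { Value = value; }
}

public class BinarySearchTree
{
    private Node root;

    // Non-recursive entry point: handles the empty-tree base case.
    public void Insert(int value)
    {
        if (root == null)
            root = new Node(value);
        else
            InsertNode(root, value);
    }

    // Recursive helper: binary chop left or right until a free slot is found.
    private void InsertNode(Node current, int value)
    {
        if (value < current.Value)
        {
            if (current.Left == null)
                current.Left = new Node(value);
            else
                InsertNode(current.Left, value);
        }
        else
        {
            if (current.Right == null)
                current.Right = new Node(value);
            else
                InsertNode(current.Right, value);
        }
    }
}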

Linked Lists – Deletion, Reverse Traversal

As you may have guessed, the cases that we use for deletion in a doubly linked list are exactly the same as those defined in §2.1.3. As with insertion, we have the added task of binding an additional reference (Previous) to the correct value.

1) algorithm Remove(head, value)
2)   Pre: head is the head node in the list
3)        value is the value to remove from the list
4)   Post: value is removed from the list, true; otherwise false
5)   if head = ∅
6)     return false
7)   end if
8)   if value = head.Value
9)     if head = tail
10)      head ← ∅
11)      tail ← ∅
12)    else
13)      head ← head.Next
14)      head.Previous ← ∅
15)    end if
16)    return true
17)  end if
18)  n ← head.Next
19)  while n ≠ ∅ and value ≠ n.Value
20)    n ← n.Next
21)  end while
22)  if n = tail
23)    tail ← tail.Previous
24)    tail.Next ← ∅
25)    return true
26)  else if n ≠ ∅
27)    n.Previous.Next ← n.Next
28)    n.Next.Previous ← n.Previous
29)    return true
30)  end if
31)  return false
32) end Remove

Singly linked lists have a forward-only design, which is why the reverse traversal algorithm defined in §2.1.5 required some creative invention. Doubly linked lists make reverse traversal as simple as forward traversal (defined in §2.1.4), except that we start at the tail node and update the pointers in the opposite direction. Figure 2.6 shows the reverse traversal algorithm in action.

1) algorithm ReverseTraversal(tail)
2)   Pre: tail is the tail node of the list to traverse
3)   Post: the list has been traversed in reverse order
4)   n ← tail
5)   while n ≠ ∅
6)     yield n.Value
7)     n ← n.Previous
8)   end while
9) end ReverseTraversal
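
To make these operations concrete, here is a minimal C# sketch of a doubly linked list implementing Remove and ReverseTraversal as above. The class and member names are illustrative and int stands in for the element type; a Count property is included to show the constant-time size query discussed below.

// Minimal doubly linked list sketch implementing Remove and
// ReverseTraversal as above. Names are illustrative; int stands in
// for the element type.
using System.Collections.Generic;

public class DoublyLinkedList
{
    private class Node
    {
        public int Value;
        public Node Next, Previous;
        public Node(int value) { Value = value; }
    }

    private Node head, tail;

    // Tracking the item count makes size queries O(1).
    public int Count { get; private set; }

    // O(1) insertion at the tail.
    public void Add(int value)
    {
        var node = new Node(value);
        if (head == null)
        {
            head = tail = node;
        }
        else
        {
            node.Previous = tail;
            tail.Next = node;
            tail = node;
        }
        Count++;
    }

    // Mirrors the Remove pseudocode: handles the empty list, the head,
    // the tail, and a node somewhere in the middle.
    public bool Remove(int value)
    {
        if (head == null)
            return false;
        if (value == head.Value)
        {
            if (head == tail)
            {
                head = tail = null;
            }
            else
            {
                head = head.Next;
                head.Previous = null;
            }
            Count--;
            return true;
        }
        var n = head.Next;
        while (n != null && value != n.Value)
            n = n.Next;
        if (n == tail)
        {
            tail = tail.Previous;
            tail.Next = null;
            Count--;
            return true;
        }
        else if (n != null)
        {
            n.Previous.Next = n.Next;
            n.Next.Previous = n.Previous;
            Count--;
            return true;
        }
        return false;
    }

    // Reverse traversal: start at the tail and follow Previous pointers.
    public IEnumerable<int> ReverseTraversal()
    {
        var n = tail;
        while (n != null)
        {
            yield return n.Value;
            n = n.Previous;
        }
    }
}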

Linked lists are good to use when you have an unknown number of items to store. Using a data structure like an array would require you to specify the size up front; exceeding that size involves invoking a resizing algorithm which has a linear run time. You should also use linked lists when you will only remove nodes at either the head or tail of the list, to maintain a constant run time. This requires maintaining pointers to the nodes at the head and tail of the list, but the memory overhead will pay for itself if this is an operation you will be performing many times.

What linked lists are not very good for is random insertion, accessing nodes by index, and searching. At the expense of a little memory (in most cases 4 bytes would suffice), and a few more reads/writes, you could maintain a count variable that tracks how many items are contained in the list so that accessing such a primitive property is a constant operation; you just need to update count during the insertion and deletion algorithms.

Singly linked lists should be used when you are only performing basic insertions. In general, doubly linked lists are more accommodating for non-trivial operations on a linked list. We recommend the use of a doubly linked list when you require forwards and backwards traversal, and in most cases this requirement is present. For example, consider a token stream that you want to parse in a recursive descent fashion. Sometimes you will have to backtrack in order to create the correct parse tree. In this scenario a doubly linked list is best, as its design makes bi-directional traversal much simpler and quicker than that of a singly linked list.

Linked Lists – Data Structures and Algorithms

Linked lists can be thought of from a high level perspective as being a series
of nodes. Each node has at least a single pointer to the next node, and in the
last node’s case a null pointer representing that there are no more nodes in the
linked list.
In DSA our implementations of linked lists always maintain head and tail
pointers so that insertion at either the head or tail of the list is a constant
time operation. Random insertion is excluded from this and will be a linear
operation. As such, linked lists in DSA have the following characteristics:

1. Insertion is O(1)
2. Deletion is O(n)
3. Searching is O(n)

Out of the three operations the one that stands out is that of insertion. In
DSA we chose to always maintain pointers (or more aptly references) to the
node(s) at the head and tail of the linked list and so performing a traditional
insertion to either the front or back of the linked list is an O(1) operation. An
exception to this rule is performing an insertion before a node that is neither
the head nor tail in a singly linked list. When the node we are inserting before
is somewhere in the middle of the linked list (known as random insertion) the
complexity is O(n). In order to add before the designated node we need to
traverse the linked list to find that node’s current predecessor. This traversal
yields an O(n) run time.

This data structure is trivial, but linked lists have a few key points which at
times make them very attractive:
1. the list is dynamically resized, thus it incurs no copy penalty like an array
or vector would eventually incur; and
2. insertion is O(1).
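
As an illustration of these characteristics, here is a minimal C# sketch of a singly linked list that maintains head and tail pointers. The names are illustrative and int stands in for the element type; AddFirst and AddLast are O(1), while Contains is O(n).

// Minimal singly linked list sketch illustrating the characteristics
// above: O(1) insertion at head or tail via maintained pointers, and
// O(n) search. Names are illustrative; int stands in for the element type.
public class SinglyLinkedList
{
    private class Node
    {
        public int Value;
        public Node Next;
        public Node(int value) { Value = value; }
    }

    private Node head, tail;

    // O(1): we always know where the tail is.
    public void AddLast(int value)
    {
        var node = new Node(value);
        if (head == null)
        {
            head = tail = node;
        }
        else
        {
            tail.Next = node;
            tail = node;
        }
    }

    // O(1): the new node simply becomes the head.
    public void AddFirst(int value)
    {
        var node = new Node(value) { Next = head };
        head = node;
        if (tail == null)
            tail = node;
    }

    // O(n): we may have to walk the entire list.
    public bool Contains(int value)
    {
        for (var n = head; n != null; n = n.Next)
            if (n.Value == value)
                return true;
        return false;
    }
}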

In the next post we will study the Singly Linked List, which we will explain with diagrams.

 


Introduction to Data Structures and Algorithms

Testing

All the data structures and algorithms have been tested using a minimised test
driven development style on paper to flesh out the pseudocode algorithm. We
then transcribe these tests into unit tests, satisfying them one by one. When
all the test cases have been progressively satisfied we consider that algorithm
suitably tested.
For the most part algorithms have fairly obvious cases which need to be
satisfied. Some however have many areas which can prove to be more complex
to satisfy. With such algorithms we will point out the test cases which are tricky
and the corresponding portions of pseudocode within the algorithm that satisfy
that respective case.
As you become more familiar with the actual problem you will be able to
intuitively identify areas which may cause problems for your algorithm's
implementation. This in some cases will yield an overwhelming list of concerns
which will hinder your ability to design an algorithm greatly. When you are
bombarded with such a vast amount of concerns, look at the overall problem again
and sub-divide the problem into smaller problems. Solving the smaller problems
and then composing them is a far easier task than clouding your mind with too
many little details.
The only type of testing that we use in the implementation of all that is
provided in this book is unit testing. Because unit tests contribute such a core
piece of creating somewhat more stable software we invite the reader to view
Appendix D which describes testing in more depth.
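
As a hypothetical illustration of transcribing paper test cases into unit tests, the following sketch uses NUnit as the framework and targets the illustrative DoublyLinkedList sketched in the linked list section; the test names and cases are only examples.

// Sketch of paper test cases transcribed into NUnit unit tests.
// DoublyLinkedList is the illustrative class from the linked list section.
using NUnit.Framework;

[TestFixture]
public class RemoveTests
{
    [Test]
    public void Remove_FromEmptyList_ReturnsFalse()
    {
        var list = new DoublyLinkedList();
        Assert.That(list.Remove(5), Is.False);
    }

    [Test]
    public void Remove_HeadValue_ReturnsTrueAndShrinksList()
    {
        var list = new DoublyLinkedList();
        list.Add(5);
        list.Add(10);
        Assert.That(list.Remove(5), Is.True);
        Assert.That(list.Count, Is.EqualTo(1));
    }
}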

In this topic we study some new topics for DSA:

1st, we study Data Structures and Algorithms – Big Oh notation

For run time complexity analysis we use big Oh notation extensively so it is vital that you are familiar with the general concepts to determine which is the best algorithm for you in certain scenarios. We have chosen to use big Oh notation for a few reasons, the most important of which is that it provides an abstract measurement by which we can judge the performance of algorithms without using mathematical proofs.

2nd, we study Data Structures and Algorithms – Imperative programming language

In the last lecture we studied Big Oh notation.
Figure 1.1 shows some of the run times to demonstrate how important it is to choose an efficient algorithm. For the sanity of our graph we have omitted cubic O(n³) and exponential O(2ⁿ) run times. Cubic and exponential algorithms should only ever be used for very small problems (if ever!); avoid them if at all possible.
The following list explains some of the most common big Oh notations:
O(1) constant: the operation doesn’t depend on the size of its input, e.g. adding a node to the tail of a linked list where we always maintain a pointer to the tail node.
O(n) linear: the run time complexity is proportionate to the size of n.
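
As a small illustration of the two notations above, the following C# snippet contrasts a constant-time tail insertion with a linear search, using the built-in LinkedList<T>:

// Contrasting O(1) and O(n) operations on the built-in LinkedList<T>.
using System;
using System.Collections.Generic;

class BigOhDemo
{
    static void Main()
    {
        var list = new LinkedList<int>();
        for (int i = 0; i < 1_000_000; i++)
            list.AddLast(i);   // O(1): a tail pointer is maintained internally

        // O(n): in the worst case every node is visited.
        bool found = list.Contains(999_999);
        Console.WriteLine(found); // True
    }
}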

 

3rd, we study Introduction to Data Structures and Algorithms – Object oriented concepts, Pseudocode


And in the next topic for DSA we will study Linked Lists.

Introduction to Data Structures and Algorithms

Tips for working through the examples

As with most books you get out what you put in and so we recommend that in
order to get the most out of this book you work through each algorithm with a
pen and paper to track things like variable names, recursive calls etc.
The best way to work through algorithms is to set up a table, and in that
table give each variable its own column and continuously update these columns.
This will help you keep track of and visualise the mutations that are occurring
throughout the algorithm. Often while working through algorithms in such
a way you can intuitively map relationships between data structures rather
than trying to work out a few values on paper and the rest in your head. We
suggest you put everything on paper irrespective of how trivial some variables
and calculations may be so that you always have a point of reference.
When dealing with recursive algorithm traces we recommend you do the
same as the above, but also have a table that records function calls and who
they return to. This approach is a far cleaner way than drawing out an elaborate
map of function calls with arrows to one another, which gets large quickly and
simply makes things more complex to follow. Track everything in a simple and
systematic way to make your time studying the implementations far easier.

The Outline

We have split the book into two parts:
Part 1: Provides discussion and pseudo-implementations of common and uncommon data structures; and
Part 2: Provides algorithms of varying purposes from sorting to string operations.
The reader doesn’t have to read the book sequentially from beginning to end: chapters can be read independently from one another. We suggest that in part 1 you read each chapter in its entirety, but in part 2 you can get away with just reading the section of a chapter that describes the algorithm you are
interested in.
Each of the chapters on data structures initially presents the algorithms concerned with:


Introduction to Data Structures and Algorithms – Object oriented concepts, Pseudocode

Object oriented concepts

For the most part this book does not use features that are specific to any one
language. In particular, we never provide data structures or algorithms that
work on generic types; this is in order to make the samples as easy to follow
as possible. However, to appreciate the designs of our data structures you will
need to be familiar with the following object oriented (OO) concepts:

1. Inheritance
2. Encapsulation
3. Polymorphism

This is especially important if you are planning on looking at the C# target
that we have implemented (more on that in §1.7), which makes extensive use
of the OO concepts listed above. As a final note it is also desirable that the
reader is familiar with interfaces, as the C# target uses interfaces throughout
the sorting algorithms.

Pseudocode

Throughout this book we use pseudocode to describe our solutions. For the
most part interpreting the pseudocode is trivial as it looks very much like a
more abstract C++ or C#, but there are a few things to point out:

1. Pre-conditions should always be enforced
2. Post-conditions represent the result of applying algorithm a to data structure d
3. The type of parameters is inferred
4. All primitive language constructs are explicitly begun and ended

If an algorithm has a return type it will often be presented in the
post-condition, but where the return type is sufficiently obvious it may be
omitted for the sake of brevity.
Most algorithms in this book require parameters, and because we assign no
explicit type to those parameters the type is inferred from the contexts in which
it is used, and the operations performed upon it. Additionally, the name of
the parameter usually acts as the biggest clue to its type. For instance n is a
pseudo-name for a number and so you can assume unless otherwise stated that
n translates to an integer that has the same number of bits as a WORD on a
32 bit machine, similarly l is a pseudo-name for a list where a list is a resizeable
array (e.g. a vector).
The last major point of reference is that we always explicitly end a language
construct. For instance if we wish to close the scope of a for loop we will
explicitly state end for rather than leaving the interpretation of when scopes
are closed to the reader. While implicit scope closure works well in simple code,
in complex cases it can lead to ambiguity.
The pseudocode style that we use within this book is rather straightforward.
All algorithms start with a simple algorithm signature, e.g.

1) algorithm AlgorithmName(arg1, arg2, …, argN)
2) …
n) end AlgorithmName
Immediately after the algorithm signature we list any Pre or Post conditions.
1) algorithm AlgorithmName(n)
2) Pre: n is the value to compute the factorial of
3) n ≥ 0
4) Post: the factorial of n has been computed
5) // …
n) end AlgorithmName

The example above describes an algorithm by the name of AlgorithmName,
which takes a single numeric parameter n. The pre and post conditions follow
the algorithm signature; you should always enforce the pre-conditions of an
algorithm when porting them to your language of choice.
Normally what is listed as a pre-condition is critical to the algorithm's
operation. This may cover things like the actual parameter not being null, or
that the collection passed in must contain at least n items. The post-condition
mainly describes the effect of the algorithm's operation. An example of a
post-condition might be "The list has been sorted in ascending order".
Because everything we describe is language independent you will need to
make your own mind up on how to best handle pre-conditions. For example,
in the C# target we have implemented, we consider non-conformance to pre-
conditions to be exceptional cases. We provide a message in the exception to
tell the caller why the algorithm has failed to execute normally.
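
As a hypothetical sketch, this is how the factorial algorithm above might enforce its n ≥ 0 pre-condition in C#; the class name, method shape, and exception message are illustrative, not the book's actual implementation.

// Illustrative sketch of enforcing a pre-condition as an exceptional
// case, in the spirit of the C# target described above.
using System;

public static class Algorithms
{
    public static long Factorial(int n)
    {
        // Pre: n >= 0 -- non-conformance is treated as an exceptional case,
        // with a message telling the caller why the algorithm failed.
        if (n < 0)
            throw new ArgumentOutOfRangeException(
                nameof(n), "n must be greater than or equal to 0");

        long result = 1;
        for (int i = 2; i <= n; i++)
            result *= i;
        return result;
    }
}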
