Zed ilife smallest laptop review
April 3, 2017
0

The Zed ilife laptop is a new brand for our family.

Their laptops are small, which is what I liked about them: I can put one in my bag and go anywhere. From the outside the look is impressive; I only wish the laptop were as good on the inside as it is on the outside.

 

The storage on this brand is small, but I can live with that since I only use it as a side laptop at work. This is a laptop that needs to update its software the minute you start using it, because the company shipped it with outdated software. Why they do that, I am not sure.

So I started it up, and right away the cursor sensor was not working: I tried to move the cursor everywhere, but not a single movement. Why, I do not know. I took it back to the shop to return it, since I should not have to keep a brand-new laptop that does not work, but returning electronics was not acceptable to them. They would fix it instead, so I had to agree to that.

They said it might take two weeks, even though the company's head office is in the same town, so I decided to wait a week. After just one week I went to them with the laptop's service number and found out it was already ready in the store. OK, then why did you not call me, guys? No answer from them.

Fine. I went home to use it, hoping that this time it would be fine. But why would it be? The charger does not charge, so I have to keep the laptop plugged in for its whole life, because the moment I unplug it, it stops working.

 

A bad experience with this laptop. I wish a real branch were here so I could talk to them, but they are abroad. The speed is low if you like surfing.

 

The storage is 320 GB, the video card is unreasonable for the price, and it has 2 GB of RAM. I bought it at 300 euros.

The colors are OK and the memory is not bad for a side laptop, but I am fed up with constantly fixing a laptop I bought new in its box.

 

My experience with cheap web hosting
March 16, 2017
3
computer

This post may turn out to be a bad review of a certain website. After I discovered bitcoin and faucets, I decided to create a faucet of my own using WordPress and its plugins. The problem, however, was funding it. I had enough to make a deposit to pay the first few users of the faucet, and I knew enough about web design to build it. But I also know first-hand the amount of time and resources that must go into keeping a web server up and running, with things like QoS (quality of service) to consider to prevent bad reviews for the site, so I had no illusions about what it would take to keep something like that up.
My first task was to find a cloud hosting service that could host my faucet along with all the user accounts and local wallets, including the server-side applications needed to manage the wallets and send payouts. In other words, it was not looking good: I would be hosting a service that basically keeps money in a database in the cloud, which means my service would depend on the quality of another service. The question that worried me was how much quality I could expect from a service chosen on my budget. I also had an amount of money in my Neteller account that was not enough for a withdrawal, so I figured it made more sense to find a hosting service that accepted payments from there than to leave the money sitting in the account. So I searched the internet for articles on web hosting services with good rates and my payment option, to gather more information about the websites before visiting them, and I found what was apparently the cheapest option: a web hosting service called hostnesta. I will leave out the domain name because I do not believe people should visit the website.
My issues were simple: the hosting service was an inconvenience, and what frightened me most was that I could not get a secure connection to the website, which meant that all my work on my site could be intercepted: user accounts, user balances and much more. Another of many issues was how quick the hosting service was to confirm my payment, and how long it took to help me with problems. After a few days of use I found that I could not access my website or its directories, which meant I could not even upload a page. Keep in mind that I also had issues with my DNS, so for all I know, my domain might never have been registered.
After sending the support team an angry message about what horrible people they were, I did what I should have done before I sent them money: look for website reviews. Surely I could not be the only person having issues with them, and I was not. I found more than enough reasons why I should never have considered hostnesta as a hosting service, and one of them should have been clear from the moment I saw the webpage. The review went something like: "no company that offers hosting from $0.60 a month can be expected to provide a good service, if any at all". It made sense, and I felt like an idiot for having been so gullible, especially as a network administrator who would rather leave all his work in the hands of someone who does a worse job of managing a server.
I learnt a very valuable lesson that day: just because something is cheap does not mean you should spend your money on it, and if you really need something done, do it yourself. There is no worse feeling than knowing that you are in a bad situation because of someone else. Today I have left hostnesta to their ways, having figured that the money spent is not worth what it would take to get it back, considering that the support team does not respond. So I have decided to save money and wait until I have enough to make my desktop PC a reasonable web server, and in the meantime develop my website, which I have decided to make a game. The incident has actually made me distrust the entire cloud system.

Cloud computing disadvantages for business
February 4, 2017
1

Downtime

As cloud service providers take care of a number of clients each day, they can become overwhelmed and may even come up against technical outages. This can lead to your business processes being temporarily suspended. Additionally, if your internet connection is offline, you will not be able to access any of your applications, servers or data from the cloud.

Security

Although cloud service providers implement the best security standards and industry certifications, storing data and important files with external service providers always opens up risks. Using cloud-powered technologies means you need to provide your service provider with access to important business data. Meanwhile, being a public service opens cloud service providers up to security challenges on a routine basis. The ease of procuring and accessing cloud services can also give nefarious users the ability to scan, identify and exploit loopholes and vulnerabilities within a system. For instance, in a multi-tenant cloud architecture, where multiple users are hosted on the same server, a hacker might try to break into the data of other users hosted and stored on that server. However, such exploits and loopholes are not likely to surface, and the likelihood of a compromise is not great.

Vendor Lock-In

Although cloud service providers promise that the cloud will be flexible to use and integrate, switching cloud services is something that has not yet completely evolved. Organizations may find it difficult to migrate their services from one vendor to another. Hosting and integrating current cloud applications on another platform may throw up interoperability and support issues. For instance, applications developed on the Microsoft development framework (.NET) might not work properly on the Linux platform.

Limited Control

Since the cloud infrastructure is entirely owned, managed and monitored by the service provider, it transfers minimal control over to the customer. The customer can only control and manage the applications, data and services operated on top of it, not the backend infrastructure itself. Key administrative tasks such as server shell access, updating and firmware management may not be passed to the customer or end user.

It is easy to see how the advantages of cloud computing easily outweigh the drawbacks. Decreased costs, reduced downtime and less management effort are benefits that speak for themselves.

Cloud services in India by Amazon
August 25, 2016
0
Cloud services in India by Amazon (image source: https://pixabay.com/en/cloud-cloud-service-internet-156135/)

We were all expecting it to happen for some time, and it has finally happened: Amazon Web Services (AWS), the world's biggest cloud services provider, has commenced its services in India by launching two of its data centers there on 28th May 2016. I think it is a wake-up call for other service providers like IBM, Microsoft and NTT.

The Chief Executive Officer of Amazon Web Services himself was present in Mumbai, India's financial capital, for the launch. He disclosed that the company had already passed its initial target of more than seventy-five thousand users even before there was a data center in India, and he expects that number to grow at a rapid rate now that Amazon has two data centers in the country.

Read more

Virtualization – VM Hypervisor
January 4, 2015
0

 

What is Virtualization?

In today's world, there is high demand for running multiple operating systems, multiple servers or even multiple storage devices. Since we cannot always have all of these physically in multiple quantities, the concept of virtualization was introduced to fulfil these needs.

With the help of virtualization, a single piece of hardware can be divided into multiple operating systems, storage devices, network resources and so on, thereby fulfilling the requirements of multiple users.

 

What is a hypervisor?

An essential requirement for cloud computing is the ability of a single physical machine to run multiple virtual machines. This is attained through virtualization, where one physical computer appears to be many independent ones. Physically adding processors and machines for every new workload is impractical, so the existing hardware is instead scheduled and shared among virtual machines. Virtualization thus offers low-cost consolidation along with support for heterogeneous environments.

A hypervisor is software which creates and runs virtual machines. The computer running the hypervisor is called the host machine, while each virtual machine running on it is called a guest machine. The hypervisor manages resource allocation and the memory mapping between the guest machines and the host operating system (OS).

It is also known as a virtual machine manager. Hypervisor software may vary between operating systems, but the basic function is always to turn a single piece of hardware into multiple virtual machines. The hypervisor controls the host machine automatically, allocating the required processor time, memory and other resources to all the guest operating systems running on the host computer without failures or problems.
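To make this concrete, here is a minimal sketch using the libvirt Python bindings, a widely used management API for hypervisors such as KVM/QEMU, to connect to a host and list its guest machines together with the resources allocated to them. The qemu:///system URI and the presence of running guests are assumptions for illustration; this is not tied to any particular hypervisor product.

```python
# Minimal sketch: ask a hypervisor (via libvirt) for its guest machines
# and the resources it has allocated to each. Assumes libvirt-python is
# installed and a local KVM/QEMU hypervisor is reachable at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the host's hypervisor
for dom in conn.listAllDomains():       # each domain is one guest machine
    # info() returns [state, max memory (KiB), memory (KiB), vCPUs, CPU time (ns)]
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"guest={dom.name()} vcpus={vcpus} memory={mem_kib // 1024} MiB")
conn.close()
```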

 

Types of hypervisor

There are two types of hypervisor, as follows:

 

Type 1 hypervisors or bare-metal hypervisors

have direct access to all the hardware and are installed directly on the computer. Type 1 hypervisors handle resource management themselves and provide the full advantage of portability and hardware abstraction while running multiple virtual machines.

 

Type 2 hypervisors or hosted hypervisors

also allow the execution of multiple virtual machines, but they do not have direct access to the hardware, which ultimately incurs more overhead when running a guest OS. So under a type 2 hypervisor the guest OS does not run at its full potential.

 

To read more, click here

 

What is Google file system?
January 3, 2015
2

 

Google File System (GFS)

Google requires a robust and very large data storage system for storing a great deal of data and giving its users the ability to access, create and alter that data. Google does not manage all of this through a large distributed computing environment equipped with high-powered computers. Instead, Google manages all the data through its exclusive Google File System (GFS), which is based on the principle of utilizing the capabilities of inexpensive commodity components while allowing hundreds of clients to access the data.
Since GFS deals with large data files, the core concerns for its designers were the manageability, scalability, fault tolerance and consistency of the system. GFS was designed so that it could easily manage large data files and also give users quick access to their desired documents.

 

Google File System Structure & Working

 

Structure of Google File System

Manipulating and accessing large data files is a time-consuming task and takes up a great deal of network bandwidth. In order to handle large data files efficiently and reduce access time for users, GFS stores data files by dividing them into chunks of 64 megabytes (MB). Each chunk has a unique identification number (the chunk handle), and chunks are replicated on different computers to handle failures. Moreover, chunks carry checksums to ensure data integrity.
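As a simplified sketch of that idea, the snippet below splits a file into 64 MB chunks, assigns each a unique handle and computes one checksum per chunk. The names (split_into_chunks, chunk_handle) are illustrative, and real GFS keeps checksums at a finer granularity than one per chunk.

```python
# Simplified sketch of GFS-style chunking: fixed 64 MB chunks, each with
# a unique handle and a checksum. Illustrative only; not Google's code.
import hashlib
import uuid

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the GFS chunk size

def split_into_chunks(path):
    chunks = []
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunks.append({
                "chunk_handle": uuid.uuid4().hex,           # unique identifier
                "checksum": hashlib.md5(data).hexdigest(),  # integrity check
            })  # each chunk would then be replicated to three chunk servers
    return chunks
```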
The Google file system consists of clusters of computers, and within each cluster there is one master server, several chunk servers and several clients. Each file chunk is replicated three times on different chunk servers to attain a high level of reliability. One replica is called the primary, while the other two are called secondaries.
The master stores the file system metadata, which includes the mapping from files to chunks, current chunk locations, the namespace and access control information. The master server communicates with chunk servers through HeartBeat messages. Clients are applications such as Google Apps or Google Docs, which place file requests. The chunk servers do not transfer a requested file through the master server; instead, they transfer it directly to the client.

 

Working of Google File System

The Google file system works using two core elements: the lease and the mutation. A mutation is a change made to a chunk by a write or append operation. A lease is used to maintain a consistent mutation order across all the replicas. The master server grants the chunk lease to the primary replica. The primary replica picks a serial mutation order, which the secondary replicas then follow. Thus the lease grant order chosen by the master defines the global mutation order, and within a lease the serial numbers assigned by the primary define the order of mutations. In GFS a write request by a client follows this sequence of numbered steps:

 

1. The client asks the master which chunkserver holds the current lease for the chunk, and where the other, secondary replicas are located.
2. The master server replies with the locations of the primary and secondary replicas. The client caches these locations for future mutations, except when the primary replica becomes unreachable or no longer holds the lease.
3. The client pushes the data to the replicas and then sends a write request to the primary replica.
4. The primary replica assigns serial numbers to the mutations and forwards the same serial mutation order to the other secondary replicas.
5. The secondary replicas reply to the primary, indicating that they have completed the write request in the same order as supplied by the primary.
6. The primary replica then informs the client that the write request is complete and, in case of errors, reports them as well.
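The toy simulation below walks through steps 4 to 6 of this sequence: a primary assigns serial numbers to mutations and forwards them, in order, to two secondaries, which acknowledge. The class and server names are invented for illustration; real GFS replicas are separate machines communicating over the network.

```python
# Toy simulation of the GFS write ordering (steps 4-6 above).
class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []  # mutations applied, in serial order

    def apply(self, serial, data):
        self.log.append((serial, data))

class PrimaryReplica(Replica):
    def __init__(self, name, secondaries):
        super().__init__(name)
        self.secondaries = secondaries
        self.next_serial = 0

    def write(self, data):
        serial = self.next_serial          # step 4: pick a serial number
        self.next_serial += 1
        self.apply(serial, data)
        acks = []
        for s in self.secondaries:         # step 4: forward the same order
            s.apply(serial, data)
            acks.append(s.name)            # step 5: secondaries acknowledge
        return {"serial": serial, "acked_by": acks}  # step 6: report back

secondaries = [Replica("chunkserver-2"), Replica("chunkserver-3")]
primary = PrimaryReplica("chunkserver-1", secondaries)
print(primary.write(b"appended record"))   # a client's write (step 3)
```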

 

To Read further, click here

TCP Incast Problem on Data centers in Cloud Computing
December 26, 2014
1
TCP Incast

 

In previous posts, we discussed cloud computing and its business models. Today, we are going to learn about the effects of the TCP incast problem on data centers in cloud computing, and solutions for TCP incast.

The Effects of the TCP Incast Problem on Data Centers in Cloud Computing

 

Cloud computing is a domain that requires big data centers to support its applications and services. Companies like Amazon, eBay and Google use big data centers to provide users with a wide variety of services such as web search, e-commerce and data storage.

In order to support huge amounts of data traffic, data centers require high-capacity links (high burst tolerance), high throughput and low propagation delays. TCP is the most widely used transport protocol on the internet. However, in data centers TCP does not perform well, due to the incast problem.
Incast occurs in cloud data centers due to many-to-one communication. It happens when a server is connected to various nodes via a switch: when the server places a request for data, the request is sent to all the nodes simultaneously, and they all reply at the same time.

This causes a microburst of data coming from all the nodes towards a single server (many-to-one communication). Because of the low queuing capacity at the switch, packets start dropping as the data rate increases drastically. This eventually leads to throughput collapse at the server side.
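For a back-of-the-envelope feel for why this collapses, consider the sketch below. The figures (a fan-in of 40 nodes, 64 KB responses, a 512 KB port buffer) are illustrative assumptions, not measurements:

```python
# Illustrative incast arithmetic: a synchronized many-to-one burst
# overflowing a shallow switch buffer. All figures are assumptions.
servers = 40                       # nodes answering one request in parallel
response_bytes = 64 * 1024         # 64 KB reply from each node
burst = servers * response_bytes   # data hitting one switch port at once

port_buffer = 512 * 1024           # shallow buffer on a commodity switch
dropped = max(0, burst - port_buffer)
print(f"burst={burst // 1024} KB, buffer={port_buffer // 1024} KB, "
      f"dropped~{dropped // 1024} KB")  # drops trigger timeouts -> collapse
```

With these numbers, 2560 KB arrives against 512 KB of buffer, so roughly 2048 KB is dropped and the affected flows stall waiting for retransmission timeouts.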

 

Solutions for TCP Incast

 

In order to reduce the TCP incast problem, switches with large buffers can be used, but this turns out to be a costly solution and also results in high latency as the buffers get deep.

Another way to tackle TCP incast is to reduce TCP's minimum RTO (retransmission timeout), which helps TCP enter retransmission quickly and deal with losses as soon as possible. However, the RTO should not be decreased too much, as that would increase retransmissions drastically and ultimately choke the bandwidth.

Data Center TCP (DCTCP) is a modified version of TCP designed specifically for cloud computing applications and high throughput. DCTCP is based on ECN (Explicit Congestion Notification), which takes congestion feedback and operates accordingly: the switch marks packets once its queue exceeds a threshold, and the sender counts how many of its packets come back marked. The window size is then decreased in proportion to the fraction of marked packets.
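Here is a sketch of that window adjustment, following the update rule from the DCTCP paper: an estimate alpha tracks the fraction of marked packets with gain g = 1/16, and the window is cut in proportion to alpha. The traffic numbers are illustrative.

```python
# DCTCP-style window adjustment: cut the congestion window in
# proportion to the estimated fraction of ECN-marked packets.
g = 1.0 / 16  # gain for the running estimate, as in the DCTCP paper

def dctcp_update(cwnd, alpha, marked, total):
    F = marked / total               # fraction marked in the last window
    alpha = (1 - g) * alpha + g * F  # running estimate of congestion extent
    cwnd = cwnd * (1 - alpha / 2)    # mild cut for mild congestion,
    return cwnd, alpha               # near-halving when everything is marked

cwnd, alpha = 100.0, 0.0
cwnd, alpha = dctcp_update(cwnd, alpha, marked=8, total=100)
print(round(cwnd, 2), round(alpha, 3))  # 99.75 0.005: a gentle reduction
```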

DCTCP works well because it reacts before packets are dropped, which preserves high throughput. Moreover, DCTCP achieves low latency because buffer occupancies stay small.

Apart from incast, TCP in the data center environment also suffers from the problems of queue buildup and buffer pressure. Due to big data flows, queues build up at the switch, and even short messages have to wait a long time because of the increased latency. Buffer pressure arises because the ports on a switch share buffer memory: if one port carries a long flow and another a short flow, the long flow consumes the shared buffer and the short flow is automatically affected.

 

Click here to read more.

What is cloud computing
December 26, 2014
2

 

Cloud computing meaning

 

Cloud computing means shared access to resources through users' connected devices, anywhere in the world, at any time. It revolves around the use of virtualization to share large databases around the globe cost-effectively. It provides better scalability and a high degree of availability thanks to the architecture of huge data centers. The core benefit of cloud computing is that it involves no upfront cost and makes better use of network bandwidth.

 

Three Types of Cloud Deployment Models

 

Public Clouds

are the ones which are built over the Internet and can be accessed by anyone who has paid for the service. Such clouds are usually owned by service providers. Some common public clouds include Microsoft Azure, Google App Engine and Amazon AWS.

 

Private Clouds

are the ones which are owned and managed by a particular client. Such clouds exist within the domain of a specific organization and are used internally within the organization, or by its partners as well. They are built on the internal data center infrastructure owned by the company itself.

 

Hybrid Clouds

Hybrid clouds are a mixture of public and private clouds and operate midway between them. They lease public cloud services yet maintain a higher degree of security and privacy.

 

 

Popular Cloud Business Models

 

Infrastructure as a Service (IaaS)

allows users to rent storage, processing and various other resources for running their applications. In this model, the user does not manage the underlying infrastructure of the cloud, yet the user is given control over the OS, storage and the selection of various network components. In short, IaaS comprises communication, computation and storage as a service. Amazon EC2 is an example of the IaaS cloud business model.
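As a minimal illustration of the IaaS idea, the sketch below rents a virtual machine from Amazon EC2 using the boto3 library: the user picks the OS image and instance size, while the provider manages the hardware underneath. The AMI ID, region and credentials are placeholder assumptions.

```python
# IaaS in practice: renting a virtual machine from Amazon EC2 via boto3.
# The AMI ID and region are placeholders; valid AWS credentials are assumed.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image (the user's OS choice)
    InstanceType="t2.micro",  # the rented slice of compute
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)  # the provider owns and manages the hardware underneath
```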

 

Platform as a Service (PaaS)

allows users to have a collaborative software development platform, with programming languages and software tools. Here, the user does not have control over the underlying cloud infrastructure, yet is given a platform for application development, testing and operation.

 

Software as a Service (SaaS)

is associated with web hosting and refers to browser-initiated application software delivered over the web.

 

Cloud computing & Business Models

 

For more details, click here.