What do you understand about marriage?

The history of marriage is often considered under History of the family or legal history.[264]

Ancient world
See also: Marriage in ancient Rome and Ancient Greek wedding customs
Many cultures have legends concerning the origins of marriage. The way in which a marriage is conducted, and its rules and ramifications, have changed over time, as has the institution itself, depending on the culture or demographic of the time.[265]

According to ancient Hebrew tradition, a wife was seen as property of high value and was therefore usually carefully looked after.[247][249] Early nomadic communities in the Middle East practised a form of marriage known as beena, in which a wife owned a tent of her own, within which she retained complete independence from her husband.[266] This principle appears to have survived in parts of early Israelite society, as some early passages of the Bible appear to portray certain wives as each owning a tent as a personal possession[266] (specifically, Jael,[267] Sarah,[268] and Jacob’s wives[269]).

The husband, too, is indirectly implied to have certain responsibilities to his wife. The Covenant Code orders: “If he take him another; her food, her clothing, and her duty of marriage, shall he not diminish (or lessen)”.[270] If the husband does not provide the first wife with these things, she is to be divorced without cost to her.[271] The Talmud interprets this as a requirement for a man to provide food and clothing to, and have sex with, each of his wives.[272][clarification needed] However, “duty of marriage” has also been interpreted more broadly as everything a couple does as a married pair, not merely sexual activity; and the term diminish, meaning to lessen, indicates that the man must treat her as if he were not married to another.

As a polygynous society, the Israelites did not have any laws that imposed marital fidelity on men.[273][274] However, the prophet Malachi states that none should be faithless to the wife of his youth and that God hates divorce.[275] Adulterous married women, adulterous betrothed women, and the men who slept with them, however, were subject to the death penalty under the biblical laws against adultery.[276][277][278] According to the Priestly Code of the Book of Numbers, if a pregnant[279] woman was suspected of adultery, she was to be subjected to the Ordeal of Bitter Water,[280] a form of trial by ordeal, but one that took a miracle to convict. The literary prophets indicate that adultery was a frequent occurrence, despite their strong protests against it[281][282][283][284] and these legal strictures.[273]

In Ancient Greece, no specific civil ceremony was required for the creation of a marriage – only mutual agreement and the fact that the couple regarded each other as husband and wife.[citation needed] Men usually married when they were in their 20s[citation needed] and women in their teens. It has been suggested that these ages made sense for the Greeks because men were generally done with military service or financially established by their late 20s, and marrying a teenage girl ensured ample time for her to bear children, as life expectancies were significantly lower.[citation needed] Married Greek women had few rights in ancient Greek society and were expected to take care of the house and children.[citation needed] Time was an important factor in Greek marriage. For example, there were superstitions that being married during a full moon was good luck and, according to Robert Flacelière, Greeks married in the winter.[citation needed] Inheritance was more important than feelings: a woman whose father died without male heirs could be forced to marry her nearest male relative – even if she had to divorce her husband first.
There were several types of marriages in ancient Roman society. The traditional (“conventional”) form called conventio in manum required a ceremony with witnesses and was also dissolved with a ceremony.[286] In this type of marriage, a woman lost her family rights of inheritance of her old family and gained them with her new one. She now was subject to the authority of her husband.[citation needed] There was the free marriage known as sine manu. In this arrangement, the wife remained a member of her original family; she stayed under the authority of her father, kept her family rights of inheritance with her old family and did not gain any with the new family.[287] The minimum age of marriage for girls was 12.[288]

Among ancient Germanic tribes, the bride and groom were roughly the same age and generally older than their Roman counterparts, at least according to Tacitus:

The youths partake late of the pleasures of love, and hence pass the age of puberty unexhausted: nor are the virgins hurried into marriage; the same maturity, the same full growth is required: the sexes unite equally matched and robust; and the children inherit the vigor of their parents.[289]

Where Aristotle had set the prime of life at 37 years for men and 18 for women, the Visigothic Code of law in the 7th century placed the prime of life at 20 years for both men and women, after which both presumably married. Tacitus states that ancient Germanic brides were on average about 20 and were roughly the same age as their husbands.[290] Tacitus, however, never visited the German-speaking lands, and most of his information on Germania comes from secondary sources. In addition, archaeological finds indicate that Anglo-Saxon women, like those of other Germanic tribes, were marked as women from the age of 12 onward, implying that the age of marriage coincided with puberty.
From the early Christian era (30 to 325 CE), marriage was thought of as primarily a private matter, with no uniform religious or other ceremony being required.[292] However, bishop Ignatius of Antioch writing around 110 to bishop Polycarp of Smyrna exhorts, “[I]t becomes both men and women who marry, to form their union with the approval of the bishop, that their marriage may be according to God, and not after their own lust.”[293]

In 12th-century Europe, women took the surname of their husbands, and starting in the second half of the 16th century, parental consent, along with the church’s consent, was required for marriage.[294]

With few local exceptions, until 1545, Christian marriages in Europe were by mutual consent, declaration of intention to marry and upon the subsequent physical union of the parties.[295][296] The couple would promise verbally to each other that they would be married to each other; the presence of a priest or witnesses was not required.[297] This promise was known as the “verbum.” If freely given and made in the present tense (e.g., “I marry you”), it was unquestionably binding;[295] if made in the future tense (“I will marry you”), it would constitute a betrothal.

In 1552 a wedding took place in Zufia, Navarre, between Diego de Zufia and Mari-Miguel, following the custom of the realm since the Middle Ages, but the man denounced the marriage on the grounds that its validity was conditioned on “riding” her (“si te cabalgo, lo cual dixo de bascuence (…) balvin yo baneça aren senar içateko”). The tribunal of the kingdom rejected the husband’s claim and validated the wedding, but the husband appealed to the tribunal in Zaragoza, which annulled the marriage.[298] According to the Charter of Navarre, the basic union consisted of a civil marriage, with no priest required and at least two witnesses, and the contract could be broken using the same formula.[citation needed] The Church in turn lashed out at those who married twice or thrice in a row while their former spouses were still alive. In 1563 the Council of Trent, in its twenty-fourth session, required that a valid marriage be performed by a priest before two witnesses.[298]

One of the functions of churches from the Middle Ages was to register marriages, which was not obligatory. There was no state involvement in marriage and personal status, with these issues being adjudicated in ecclesiastical courts. During the Middle Ages marriages were arranged, sometimes as early as birth, and these early pledges to marry were often used to ensure treaties between different royal families, nobles, and heirs of fiefdoms. The church resisted these imposed unions, and increased the number of causes for nullification of these arrangements.[294] As Christianity spread during the Roman period and the Middle Ages, the idea of free choice in selecting marriage partners increased and spread with it.[294]

In Medieval Western Europe, later marriage and higher rates of definitive celibacy (the so-called “European marriage pattern”) helped to constrain patriarchy at its most extreme level. For example, Medieval England saw marriage age as variable depending on economic circumstances, with couples delaying marriage until the early twenties when times were bad and falling to the late teens after the Black Death, when there were labor shortages;[299] by appearances, marriage of adolescents was not the norm in England.[300][301] Where the strong influence of classical Celtic and Germanic cultures (which were not rigidly patriarchal)[302][303] helped to offset the Judaeo-Roman patriarchal influence,[304] in Eastern Europe the tradition of early and universal marriage (often in early adolescence)[305] as well as traditional Slavic patrilocal custom[306] led to a greatly inferior status of women at all levels of society.
The average age of marriage for most of Northwestern Europe from 1500 to 1800 was around 25 years.[308][309][310] As the Church dictated that both parties had to be at least 21 years of age to marry without the consent of their parents, the bride and groom were roughly the same age, with most brides in their early twenties and most grooms two or three years older.[310] A substantial number of women married for the first time in their thirties and forties, particularly in urban areas,[311] and the average age at first marriage rose and fell as circumstances dictated. In better times, more people could afford to marry earlier, and thus fertility rose; conversely, marriages were delayed or forgone when times were bad, restricting family size.[312] After the Black Death, the greater availability of profitable jobs allowed more people to marry young and have more children,[313] but the stabilization of the population in the 16th century meant fewer job opportunities, and thus more people delayed marriage.[314]

The age of marriage was not absolute, however, as child marriages occurred throughout the Middle Ages and later. Examples include:

The 1552 CE marriage between John Somerford and Jane Somerford Brereto, at the ages of 3 and 2, respectively.[40][41]
In the early 1900s, Magnus Hirschfeld surveyed the age of consent in about 50 countries and found that it often ranged between 12 and 16. In the Vatican, the age of consent was 12.[315]
As part of the Protestant Reformation, the role of recording marriages and setting the rules for marriage passed to the state, reflecting Martin Luther’s view that marriage was a “worldly thing”.[316] By the 17th century, many of the Protestant European countries had a state involvement in marriage.

In England, under the Anglican Church, marriage by consent and cohabitation was valid until the passage of Lord Hardwicke’s Act in 1753. This act instituted certain requirements for marriage, including the performance of a religious ceremony observed by witnesses.
As part of the Counter-Reformation, in 1563 the Council of Trent decreed that a Roman Catholic marriage would be recognized only if the marriage ceremony was officiated by a priest with two witnesses. The Council also authorized a Catechism, issued in 1566, which defined marriage as, “The conjugal union of man and woman, contracted between two qualified persons, which obliges them to live together throughout life.”[209]

In the early modern period, John Calvin and his Protestant colleagues reformulated Christian marriage by enacting the Marriage Ordinance of Geneva, which imposed “The dual requirements of state registration and church consecration to constitute marriage”[209] for recognition.

In England and Wales, Lord Hardwicke’s Marriage Act 1753 required a formal ceremony of marriage, thereby curtailing the practice of Fleet Marriages: clandestine or irregular marriages performed at Fleet Prison and at hundreds of other places.[318] From the 1690s until the Marriage Act of 1753, as many as 300,000 clandestine marriages were performed at Fleet Prison alone.[319] The Act required a marriage ceremony to be officiated by an Anglican priest in the Anglican Church, with two witnesses and registration. The Act did not apply to Jewish marriages or those of Quakers, whose marriages continued to be governed by their own customs.
In England and Wales, civil marriages have been recognized as a legal alternative to church marriages since 1837, under the Marriage Act 1836. In Germany, civil marriages were recognized in 1875. The German law permitted a marriage to be declared before an official clerk of the civil administration, with both spouses affirming their will to marry, to constitute a legally recognized, valid, and effective marriage, and it allowed an optional private clerical marriage ceremony.

In contemporary English common law, a marriage is a voluntary contract by a man and a woman, in which by agreement they choose to become husband and wife.[320] Edvard Westermarck proposed that “the institution of marriage has probably developed out of a primeval habit”.[321]

As of 2000, the average age at marriage ranged from 25 to 44 years for men and from 22 to 39 years for women.

China
Main article: Chinese marriage
The mythological origin of Chinese marriage is a story about Nüwa and Fu Xi, who invented proper marriage procedures after becoming married. In ancient Chinese society, people of the same surname were expected to consult their family trees before marriage to reduce the risk of unintentional incest. Marrying one’s maternal relatives was generally not thought of as incest. Families sometimes intermarried from one generation to another. Over time, Chinese people became more geographically mobile, but individuals remained members of their biological families. When a couple died, the husband and the wife were buried separately in their respective clans’ graveyards. In a maternal marriage, a male would become a son-in-law who lived in the wife’s home.

The New Marriage Law of 1950 radically changed Chinese marriage traditions, enforcing monogamy, equality of men and women, and freedom of choice in marriage; arranged marriages had been the most common type of marriage in China until then. Starting in October 2003, it became legal to marry or divorce without authorization from the couple’s work units.[322][clarification needed] Although people with infectious diseases such as AIDS may now marry, marriage remains illegal for the mentally ill.

Many things people don’t know about Animals
May 24, 2017

Animals are multicellular, eukaryotic organisms of the kingdom Animalia (also called Metazoa). The animal kingdom emerged as a clade within Apoikozoa as the sister group to the choanoflagellates. Animals are motile, meaning they can move spontaneously and independently at some point in their lives. Their body plan eventually becomes fixed as they develop, although some undergo a process of metamorphosis later in their lives. All animals are heterotrophs: they must ingest other organisms or their products for sustenance.

Most known animal phyla appeared in the fossil record as marine species during the Cambrian explosion, about 542 million years ago. Animals can be divided broadly into vertebrates and invertebrates. Vertebrates have a backbone or spine (vertebral column), and amount to less than five percent of all described animal species. They include fish, amphibians, reptiles, birds and mammals. The remaining animals are the invertebrates, which lack a backbone. These include molluscs (clams, oysters, octopuses, squid, snails); arthropods (millipedes, centipedes, insects, spiders, scorpions, crabs, lobsters, shrimp); annelids (earthworms, leeches), nematodes (filarial worms, hookworms), flatworms (tapeworms, liver flukes), cnidarians (jellyfish, sea anemones, corals), ctenophores (comb jellies), and sponges. The study of animals is called zoology.
The word “animal” comes from the Latin animalis, meaning having breath, having soul or living being.[3] In everyday non-scientific usage the word excludes humans – that is, “animal” is often used to refer only to non-human members of the kingdom Animalia; often, only closer relatives of humans such as mammals and other vertebrates, are meant.[4] The biological definition of the word refers to all members of the kingdom Animalia, encompassing creatures as diverse as sponges, jellyfish, insects, and humans.
Aristotle divided the living world between animals and plants, and this was followed by Carl Linnaeus, in the first hierarchical classification.[7] In Linnaeus’s original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then the last four have all been subsumed into a single phylum, the Chordata, whereas the various other forms have been separated out.

In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals) and Protozoa (single-celled animals).[8] The protozoa were later moved to the kingdom Protista, leaving only the metazoa. Thus Metazoa is now considered a synonym of Animalia.
Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular,[10] which separates them from bacteria and most protists. They are heterotrophic,[11] generally digesting food in an internal chamber, which separates them from plants and algae.[12] They are also distinguished from plants, algae, and fungi by lacking rigid cell walls.[13] All animals are motile,[14] if only at certain life stages. In most animals, embryos pass through a blastula stage,[15] which is a characteristic exclusive to animals.

Structure
With a few exceptions, most notably the sponges (Phylum Porifera) and Placozoa, animals have bodies differentiated into separate tissues. These include muscles, which are able to contract and control locomotion, and nerve tissues, which send and process signals. Typically, there is also an internal digestive chamber, with one or two openings.[16] Animals with this sort of organization are called metazoans, or eumetazoans when the former is used for animals in general.[17]

All animals have eukaryotic cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins.[18] This may be calcified to form structures like shells, bones, and spicules.[19] During development, it forms a relatively flexible framework[20] upon which cells can move about and be reorganized, making complex structures possible. In contrast, other multicellular organisms, like plants and fungi, have cells held in place by cell walls, and so develop by progressive growth.[16] Also, unique to animal cells are the following intercellular junctions: tight junctions, gap junctions, and desmosomes.
Nearly all animals undergo some form of sexual reproduction.[23] They produce haploid gametes by meiosis (see Origin and function of meiosis). The smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova.[24] These fuse to form zygotes, which develop into new individuals[25] (see Allogamy).

Many animals are also capable of asexual reproduction.[26] This may take place through parthenogenesis, in which fertile eggs are produced without mating, or through budding or fragmentation.[27]

A zygote initially develops into a hollow sphere, called a blastula,[28] which undergoes rearrangement and differentiation. In sponges, blastula larvae swim to a new location and develop into a new sponge.[29] In most other groups, the blastula undergoes more complicated rearrangement.[30] It first invaginates to form a gastrula with a digestive chamber, and two separate germ layers—an external ectoderm and an internal endoderm.[31] In most cases, a mesoderm also develops between them.[32] These germ layers then differentiate to form tissues and organs.
During sexual reproduction, mating with a close relative (inbreeding) generally leads to inbreeding depression. For instance, inbreeding was found to increase juvenile mortality in 11 small animal species.[34] Inbreeding depression is considered to be largely due to expression of deleterious recessive mutations.[35] Mating with unrelated or distantly related members of the same species is generally thought to provide the advantage of masking deleterious recessive mutations in progeny.[36] (see Heterosis). Animals have evolved numerous diverse mechanisms for avoiding close inbreeding and promoting outcrossing[37] (see Inbreeding avoidance).

Chimpanzees, for example, have adopted dispersal as a way to separate close relatives and prevent inbreeding.[37] Their dispersal route is known as natal dispersal, whereby individuals move away from the area of birth.
In various species, such as the splendid fairywren, females benefit by mating with multiple males, thus producing more offspring of higher genetic quality. Females that are pair bonded to a male of poor genetic quality, as is the case in inbreeding, are more likely to engage in extra-pair copulations in order to improve their reproductive success and the survivability of their offspring.
All animals are heterotrophs, meaning that they feed directly or indirectly on other living things.[39] They are often further subdivided into groups such as carnivores, herbivores, omnivores, and parasites.[40]

Predation is a biological interaction where a predator (a heterotroph that is hunting) feeds on its prey (the organism that is attacked).[41] Predators may or may not kill their prey prior to feeding on them, but the act of predation almost always results in the death of the prey.[42] The other main category of consumption is detritivory, the consumption of dead organic matter.[43] It can at times be difficult to separate the two feeding behaviours, for example, where parasitic species prey on a host organism and then lay their eggs on it for their offspring to feed on its decaying corpse. The selective pressures imposed on one another have led to an evolutionary arms race between prey and predator, resulting in various antipredator adaptations.[44]

Most animals indirectly use the energy of sunlight by eating plants or plant-eating animals. Most plants use light to convert inorganic molecules in their environment into carbohydrates, fats, proteins and other biomolecules, characteristically containing reduced carbon in the form of carbon-hydrogen bonds. Starting with carbon dioxide (CO2) and water (H2O), photosynthesis converts the energy of sunlight into chemical energy in the form of simple sugars (e.g., glucose), with the release of molecular oxygen. These sugars are then used as the building blocks for plant growth, including the production of other biomolecules.[16] When an animal eats plants (or eats other animals which have eaten plants), the reduced carbon compounds in the food become a source of energy and building materials for the animal.[45] They are either used directly to help the animal grow, or broken down, releasing stored solar energy, and giving the animal the energy required for motion.[46][47]

Animals living close to hydrothermal vents and cold seeps on the ocean floor are not dependent on the energy of sunlight.[48] Instead, chemosynthetic archaea and bacteria form the base of the food chain.
Animals are generally considered to have emerged within flagellated eukaryota.[51] Their closest known living relatives are the choanoflagellates, collared flagellates that have a morphology similar to the choanocytes of certain sponges.[52] Molecular studies place animals in a supergroup called the opisthokonts, which also include the choanoflagellates, fungi and a few small parasitic protists.[53] The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella.[54]

The first fossils that might represent animals appear in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia.[55] These fossils are interpreted as being early sponges. They were found in 665-million-year-old rock.[55]

The next oldest possible animal fossils are found towards the end of the Precambrian, around 610 million years ago, and are known as the Ediacaran or Vendian biota.[56] These are difficult to relate to later fossils, however. Some may represent precursors of modern phyla, but they may be separate groups, and it is possible they are not really animals at all.[57]

Aside from them, most known animal phyla make a more or less simultaneous appearance during the Cambrian period, about 542 million years ago.[58] It is still disputed whether this event, called the Cambrian explosion, is due to a rapid divergence between different groups or due to a change in conditions that made fossilization possible.

Some palaeontologists suggest that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago.[59] Trace fossils such as tracks and burrows found in the Tonian period indicate the presence of triploblastic worms, like metazoans, roughly as large (about 5 mm wide) and complex as earthworms.[60] During the beginning of the Tonian period around 1 billion years ago, there was a decrease in stromatolite diversity, which may indicate the appearance of grazing animals, since stromatolite diversity increased when grazing animals became extinct at the End Permian and End Ordovician extinction events, and decreased shortly after the grazer populations recovered. However, the discovery that tracks very similar to these early trace fossils are produced today by the giant single-celled protist Gromia sphaerica casts doubt on their interpretation as evidence of early animal evolution.
Traditional morphological and modern molecular phylogenetic analyses have both recognized a major evolutionary transition from “non-bilaterian” animals, which are those lacking a bilaterally symmetric body plan (Porifera, Ctenophora, Cnidaria and Placozoa), to “bilaterian” animals (Bilateria) whose body plans display bilateral symmetry. The latter are further classified based on a major division between Deuterostomes and Protostomes. The relationships among non-bilaterian animals are disputed, but all bilaterian animals are thought to form a monophyletic group. Current understanding of the relationships among the major groups of animals is summarized by the following cladogram:[63]

Apoikozoa
├── Choanoflagellata
└── Animalia (Metazoa)
    ├── Porifera
    ├── Placozoa
    ├── Ctenophora
    ├── Cnidaria
    └── Bilateria
        ├── Deuterostomes
        └── Protostomes
            ├── Ecdysozoa
            └── Lophotrochozoa
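For readers who want to work with the tree above programmatically, it can be sketched as a nested Python dict. This is an illustrative representation, not part of the original article; following the text, the disputed branching order among the non-bilaterian phyla is shown as an unresolved polytomy directly under Animalia.

```python
# Cladogram from the text as nested dicts: each key is a clade, each
# value maps its immediate subgroups (an empty dict marks a terminal group).
cladogram = {
    "Apoikozoa": {
        "Choanoflagellata": {},
        "Animalia": {
            # Non-bilaterian phyla: relationships disputed, shown flat.
            "Porifera": {},
            "Placozoa": {},
            "Ctenophora": {},
            "Cnidaria": {},
            "Bilateria": {
                "Deuterostomes": {},
                "Protostomes": {
                    "Ecdysozoa": {},
                    "Lophotrochozoa": {},
                },
            },
        },
    },
}

def leaves(tree):
    """Return the terminal groups (clades with no listed subgroups)."""
    out = []
    for name, sub in tree.items():
        out.extend(leaves(sub) if sub else [name])
    return out

print(leaves(cladogram))
# ['Choanoflagellata', 'Porifera', 'Placozoa', 'Ctenophora', 'Cnidaria',
#  'Deuterostomes', 'Ecdysozoa', 'Lophotrochozoa']
```

The nesting only encodes group membership, not branch lengths or divergence times, which the cladogram in the text does not specify either.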

Non-bilaterian animals: Porifera, Placozoa, Ctenophora, Cnidaria
Several animal phyla are recognized for their lack of bilateral symmetry, and are thought to have diverged from other animals early in evolution. Among these, the sponges (Porifera) were long thought to have diverged first, representing the oldest animal phylum.[64] They lack the complex organization found in most other phyla.[65] Their cells are differentiated, but in most cases not organized into distinct tissues.[66] Sponges typically feed by drawing in water through pores.[67] However, a series of phylogenomic studies from 2008 to 2015 has found support for Ctenophora, or comb jellies, as the basal lineage of animals.[68][69][70][71] This result has been controversial, since it would imply that sponges may not be so primitive, but may instead be secondarily simplified.[68] Other researchers have argued that the placement of Ctenophora as the earliest-diverging animal phylum is a statistical anomaly caused by the high rate of evolution in ctenophore genomes.[72][73][74][75]

Among the other phyla, the Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the anus.[76] Both have distinct tissues, but they are not organized into organs.[77] There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, these animals are sometimes called diploblastic.[78] The tiny placozoans are similar, but they do not have a permanent digestive chamber.

The Myxozoa, microscopic parasites that were originally considered Protozoa, are now believed to have evolved within Cnidaria.
Bilaterian animals
The remaining animals form a monophyletic group called the Bilateria. For the most part, they are bilaterally symmetric, and often have a specialized head with feeding and sensory organs. The body is triploblastic, i.e. all three germ layers are well-developed, and tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and there is also an internal body cavity called a coelom or pseudocoelom. There are exceptions to each of these characteristics, however—for instance adult echinoderms are radially symmetric, and certain parasitic worms have extremely simplified body structures.

Genetic studies have considerably changed our understanding of the relationships within the Bilateria. Most appear to belong to two major lineages: the deuterostomes and the protostomes, the latter of which includes the Ecdysozoa, and Lophotrochozoa. The Chaetognatha or arrow worms have been traditionally classified as deuterostomes, though recent molecular studies have identified this group as a basal protostome lineage.[80]

In addition, there are a few small groups of bilaterians with relatively cryptic morphology whose relationships with other animals are not well-established. For example, recent molecular studies have identified Acoelomorpha and Xenoturbella as comprising a monophyletic group,[81][82][83] but studies disagree as to whether this group evolved from within deuterostomes,[82] or whether it represents the sister group to all other bilaterian animals (Nephrozoa).[84][85] Other groups of uncertain affinity include the Rhombozoa and Orthonectida. One phylum, the Monoblastozoa, was described in 1892, but no evidence of its existence has been found since.
Deuterostomes differ from protostomes in several ways. Animals from both groups possess a complete digestive tract. However, in protostomes, the first opening of the gut to appear in embryological development (the archenteron) develops into the mouth, with the anus forming secondarily. In deuterostomes the anus forms first, with the mouth developing secondarily.[87] In most protostomes, cells simply fill in the interior of the gastrula to form the mesoderm, called schizocoelous development, but in deuterostomes, it forms through invagination of the endoderm, called enterocoelic pouching.[88] Deuterostome embryos undergo radial cleavage during cell division, while protostomes undergo spiral cleavage.[89]

All this suggests the deuterostomes and protostomes are separate, monophyletic lineages. The main phyla of deuterostomes are the Echinodermata and Chordata.[90] The former are radially symmetric and exclusively marine, such as starfish, sea urchins, and sea cucumbers.[91] The latter are dominated by the vertebrates, animals with backbones.[92] These include fish, amphibians, reptiles, birds, and mammals.[93]

In addition to these, the deuterostomes also include the Hemichordata, or acorn worms, which are thought to be closely related to Echinodermata forming a group known as Ambulacraria.[94][95] Although they are not especially prominent today, the important fossil graptolites may belong to this group.
The Ecdysozoa are protostomes, named after the common trait of growth by moulting or ecdysis.[97] The largest animal phylum, the Arthropoda, belongs here, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second largest animal phylum. Roundworms are typically microscopic, and occur in nearly every environment where there is water.[98] A number are important parasites.[99] Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom.
Lophotrochozoa
The Lophotrochozoa, which evolved within the Protostomia, include two of the most successful animal phyla, the Mollusca and Annelida.[100][101] The former, which is the second-largest animal phylum by number of described species, includes animals such as snails, clams, and squids, and the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long been considered close relatives because of the common presence of trochophore larvae, but the annelids were considered closer to the arthropods because they are both segmented.[102] Now, this is generally considered convergent evolution, owing to many morphological and genetic differences between the two phyla.[103] Lophotrochozoa also includes the Nemertea or ribbon worms, the Sipuncula, and several phyla that have a ring of ciliated tentacles around the mouth, called a lophophore.[104] These were traditionally grouped together as the lophophorates,[105] but it now appears that the lophophorate group may be paraphyletic,[106] with some closer to the nemerteans and some to the molluscs and annelids.[107][108] They include the Brachiopoda or lamp shells, which are prominent in the fossil record, the Entoprocta, the Phoronida, and possibly the Bryozoa or moss animals.[109]

The Platyzoa include the phylum Platyhelminthes, the flatworms.[110] These were originally considered some of the most primitive Bilateria, but it now appears they developed from more complex ancestors.[111] A number of parasites are included in this group, such as the flukes and tapeworms.[110] Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha.[112] The other platyzoan phyla are mostly microscopic and pseudocoelomate. The most prominent are the Rotifera or rotifers, which are common in aquatic environments. They also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and possibly the Cycliophora.[113] These groups share the presence of complex jaws, from which they are called the Gnathifera.

A relationship between the Brachiopoda and Nemertea has been suggested by molecular data,[114] and a second study supports this relationship.[115] The latter study also suggested that Annelida and Mollusca may be sister clades, a conclusion reached independently by a further study.[116] This clade has been termed the Neotrochozoa.

Rate This Content
Many things people don’t know about Animal !!!!
May 24, 2017
0
Woe betide those who have not understood animals

Animals are multicellular, eukaryotic organisms of the kingdom Animalia (also called Metazoa). The animal kingdom emerged as a clade within Apoikozoa as the sister group to the choanoflagellates. Animals are motile, meaning they can move spontaneously and independently at some point in their lives. Their body plan eventually becomes fixed as they develop, although some undergo a process of metamorphosis later in their lives. All animals are heterotrophs: they must ingest other organisms or their products for sustenance.

Most known animal phyla appeared in the fossil record as marine species during the Cambrian explosion, about 542 million years ago. Animals can be divided broadly into vertebrates and invertebrates. Vertebrates have a backbone or spine (vertebral column), and amount to less than five percent of all described animal species. They include fish, amphibians, reptiles, birds and mammals. The remaining animals are the invertebrates, which lack a backbone. These include molluscs (clams, oysters, octopuses, squid, snails); arthropods (millipedes, centipedes, insects, spiders, scorpions, crabs, lobsters, shrimp); annelids (earthworms, leeches), nematodes (filarial worms, hookworms), flatworms (tapeworms, liver flukes), cnidarians (jellyfish, sea anemones, corals), ctenophores (comb jellies), and sponges. The study of animals is called zoology.
The word “animal” comes from the Latin animalis, meaning having breath, having soul or living being.[3] In everyday non-scientific usage the word excludes humans – that is, “animal” is often used to refer only to non-human members of the kingdom Animalia; often, only closer relatives of humans such as mammals and other vertebrates, are meant.[4] The biological definition of the word refers to all members of the kingdom Animalia, encompassing creatures as diverse as sponges, jellyfish, insects, and humans.
Aristotle divided the living world between animals and plants, and this was followed by Carl Linnaeus, in the first hierarchical classification.[7] In Linnaeus’s original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then the last four have all been subsumed into a single phylum, the Chordata, whereas the various other forms have been separated out.

In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals) and Protozoa (single-celled animals).[8] The protozoa were later moved to the kingdom Protista, leaving only the metazoa. Thus Metazoa is now considered a synonym of Animalia.
Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular,[10] which separates them from bacteria and most protists. They are heterotrophic,[11] generally digesting food in an internal chamber, which separates them from plants and algae.[12] They are also distinguished from plants, algae, and fungi by lacking rigid cell walls.[13] All animals are motile,[14] if only at certain life stages. In most animals, embryos pass through a blastula stage,[15] which is a characteristic exclusive to animals.

Structure
With a few exceptions, most notably the sponges (Phylum Porifera) and Placozoa, animals have bodies differentiated into separate tissues. These include muscles, which are able to contract and control locomotion, and nerve tissues, which send and process signals. Typically, there is also an internal digestive chamber, with one or two openings.[16] Animals with this sort of organization are called metazoans, or eumetazoans when the former is used for animals in general.[17]

All animals have eukaryotic cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins.[18] This may be calcified to form structures like shells, bones, and spicules.[19] During development, it forms a relatively flexible framework[20] upon which cells can move about and be reorganized, making complex structures possible. In contrast, other multicellular organisms, like plants and fungi, have cells held in place by cell walls, and so develop by progressive growth.[16] Also, unique to animal cells are the following intercellular junctions: tight junctions, gap junctions, and desmosomes.
Nearly all animals undergo some form of sexual reproduction.[23] They produce haploid gametes by meiosis (see Origin and function of meiosis). The smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova.[24] These fuse to form zygotes, which develop into new individuals[25] (see Allogamy).

Many animals are also capable of asexual reproduction.[26] This may take place through parthenogenesis, where fertile eggs are produced without mating, budding, or fragmentation.[27]

A zygote initially develops into a hollow sphere, called a blastula,[28] which undergoes rearrangement and differentiation. In sponges, blastula larvae swim to a new location and develop into a new sponge.[29] In most other groups, the blastula undergoes more complicated rearrangement.[30] It first invaginates to form a gastrula with a digestive chamber, and two separate germ layers—an external ectoderm and an internal endoderm.[31] In most cases, a mesoderm also develops between them.[32] These germ layers then differentiate to form tissues and organs.
During sexual reproduction, mating with a close relative (inbreeding) generally leads to inbreeding depression. For instance, inbreeding was found to increase juvenile mortality in 11 small animal species.[34] Inbreeding depression is considered to be largely due to expression of deleterious recessive mutations.[35] Mating with unrelated or distantly related members of the same species is generally thought to provide the advantage of masking deleterious recessive mutations in progeny.[36] (see Heterosis). Animals have evolved numerous diverse mechanisms for avoiding close inbreeding and promoting outcrossing[37] (see Inbreeding avoidance).

As indicated in the image of chimpanzees, they have adopted dispersal as a way to separate close relatives and prevent inbreeding.[37] Their dispersal route is known as natal dispersal, whereby individuals move away from the area of birth.
n various species, such as the splendid fairywren, females benefit by mating with multiple males, thus producing more offspring of higher genetic quality. Females that are pair bonded to a male of poor genetic quality, as is the case in inbreeding, are more likely to engage in extra-pair copulations in order to improve their reproductive success and the survivability of their offspring.
All animals are heterotrophs, meaning that they feed directly or indirectly on other living things.[39] They are often further subdivided into groups such as carnivores, herbivores, omnivores, and parasites.[40]

Predation is a biological interaction where a predator (a heterotroph that is hunting) feeds on its prey (the organism that is attacked).[41] Predators may or may not kill their prey prior to feeding on them, but the act of predation almost always results in the death of the prey.[42] The other main category of consumption is detritivory, the consumption of dead organic matter.[43] It can at times be difficult to separate the two feeding behaviours, for example, where parasitic species prey on a host organism and then lay their eggs on it for their offspring to feed on its decaying corpse. Selective pressures imposed on one another has led to an evolutionary arms race between prey and predator, resulting in various antipredator adaptations.[44]

Most animals indirectly use the energy of sunlight by eating plants or plant-eating animals. Most plants use light to convert inorganic molecules in their environment into carbohydrates, fats, proteins and other biomolecules, characteristically containing reduced carbon in the form of carbon-hydrogen bonds. Starting with carbon dioxide (CO2) and water (H2O), photosynthesis converts the energy of sunlight into chemical energy in the form of simple sugars (e.g., glucose), with the release of molecular oxygen. These sugars are then used as the building blocks for plant growth, including the production of other biomolecules.[16] When an animal eats plants (or eats other animals which have eaten plants), the reduced carbon compounds in the food become a source of energy and building materials for the animal.[45] They are either used directly to help the animal grow, or broken down, releasing stored solar energy, and giving the animal the energy required for motion.[46][47]

Animals living close to hydrothermal vents and cold seeps on the ocean floor are not dependent on the energy of sunlight.[48] Instead chemosynthetic archaea and bacteria form the base of the food chain.
Animals are generally considered to have emerged within flagellated eukaryota.[51] Their closest known living relatives are the choanoflagellates, collared flagellates that have a morphology similar to the choanocytes of certain sponges.[52] Molecular studies place animals in a supergroup called the opisthokonts, which also include the choanoflagellates, fungi and a few small parasitic protists.[53] The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella.[54]

The first fossils that might represent animals appear in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia.[55] These fossils are interpreted as being early sponges. They were found in 665-million-year-old rock.[55]

The next oldest possible animal fossils are found towards the end of the Precambrian, around 610 million years ago, and are known as the Ediacaran or Vendian biota.[56] These are difficult to relate to later fossils, however. Some may represent precursors of modern phyla, but they may be separate groups, and it is possible they are not really animals at all.[57]

Aside from them, most known animal phyla make a more or less simultaneous appearance during the Cambrian period, about 542 million years ago.[58] It is still disputed whether this event, called the Cambrian explosion, is due to a rapid divergence between different groups or due to a change in conditions that made fossilization possible.

Some palaeontologists suggest that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago.[59] Trace fossils such as tracks and burrows found in the Tonian period indicate the presence of triploblastic worms, like metazoans, roughly as large (about 5 mm wide) and complex as earthworms.[60] During the beginning of the Tonian period around 1 billion years ago, there was a decrease in Stromatolite diversity, which may indicate the appearance of grazing animals, since stromatolite diversity increased when grazing animals became extinct at the End Permian and End Ordovician extinction events, and decreased shortly after the grazer populations recovered. However the discovery that tracks very similar to these early trace fossils are produced today by the giant single-celled protist Gromia sphaerica casts doubt on their interpretation as evidence of early animal evolution.
Traditional morphological and modern molecular phylogenetic analysis have both recognized a major evolutionary transition from “non-bilaterian” animals, which are those lacking a bilaterally symmetric body plan (Porifera, Ctenophora, Cnidaria and Placozoa), to “bilaterian” animals (Bilateria) whose body plans display bilateral symmetry. The latter are further classified based on a major division between Deuterostomes and Protostomes. The relationships among non-bilaterian animals are disputed, but all bilaterian animals are thought to form a monophyletic group. Current understanding of the relationships among the major groups of animals is summarized by the following cladogram:[63]

Apoikozoa

Choanoflagellata

Animal

Porifera

Placozoa

Ctenophora

Cnidaria

Bilateria

Deuterostomes

Protostomes

Ecdysozoa

Lophotrochozoa

Non-bilaterian animals: Porifera, Placozoa, Ctenophora, Cnidaria
Several animal phyla are recognized for their lack of bilateral symmetry, and are thought to have diverged from other animals early in evolution. Among these, the sponges (Porifera) were long thought to have diverged first, representing the oldest animal phylum.[64] They lack the complex organization found in most other phyla.[65] Their cells are differentiated, but in most cases not organized into distinct tissues.[66] Sponges typically feed by drawing in water through pores.[67] However, a series of phylogenomic studies from 2008-2015 have found support for Ctenophora, or comb jellies, as the basal lineage of animals.[68][69][70][71] This result has been controversial, since it would imply that sponges may not be so primitive, but may instead be secondarily simplified.[68] Other researchers have argued that the placement of Ctenophora as the earliest-diverging animal phylum is a statistical anomaly caused by the high rate of evolution in ctenophore genomes.[72][73][74][75]

Among the other phyla, the Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the anus.[76] Both have distinct tissues, but they are not organized into organs.[77] There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, these animals are sometimes called diploblastic.[78] The tiny placozoans are similar, but they do not have a permanent digestive chamber.

The Myxozoa, microscopic parasites that were originally considered Protozoa, are now believed to have evolved within Cnidaria.
Bilaterian animals
The remaining animals form a monophyletic group called the Bilateria. For the most part, they are bilaterally symmetric, and often have a specialized head with feeding and sensory organs. The body is triploblastic, i.e. all three germ layers are well-developed, and tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and there is also an internal body cavity called a coelom or pseudocoelom. There are exceptions to each of these characteristics, however—for instance adult echinoderms are radially symmetric, and certain parasitic worms have extremely simplified body structures.

Genetic studies have considerably changed our understanding of the relationships within the Bilateria. Most appear to belong to two major lineages: the deuterostomes and the protostomes, the latter of which includes the Ecdysozoa, and Lophotrochozoa. The Chaetognatha or arrow worms have been traditionally classified as deuterostomes, though recent molecular studies have identified this group as a basal protostome lineage.[80]

In addition, there are a few small groups of bilaterians with relatively cryptic morphology whose relationships with other animals are not well established. For example, recent molecular studies have identified Acoelomorpha and Xenoturbella as comprising a monophyletic group,[81][82][83] but studies disagree as to whether this group evolved from within deuterostomes,[82] or whether it represents the sister group to all other bilaterian animals (Nephrozoa).[84][85] Other groups of uncertain affinity include the Rhombozoa and Orthonectida. One phylum, the Monoblastozoa, was described by a scientist in 1892, but so far there has been no evidence of its existence.
Deuterostomes differ from protostomes in several ways. Animals from both groups possess a complete digestive tract. However, in protostomes, the first opening of the gut to appear in embryological development (the archenteron) develops into the mouth, with the anus forming secondarily. In deuterostomes the anus forms first, with the mouth developing secondarily.[87] In most protostomes, cells simply fill in the interior of the gastrula to form the mesoderm, called schizocoelous development, but in deuterostomes, it forms through invagination of the endoderm, called enterocoelic pouching.[88] Deuterostome embryos undergo radial cleavage during cell division, while protostomes undergo spiral cleavage.[89]
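The developmental contrasts just described can be restated compactly. The following Python lookup simply transcribes them; the key names are mine, chosen for illustration:

```python
# Developmental contrasts between the two bilaterian lineages,
# as described in the text above.
DEVELOPMENT = {
    "protostomes": {
        "blastopore_fate": "mouth",          # archenteron opening becomes the mouth
        "coelom_formation": "schizocoelous", # cells fill the gastrula interior
        "cleavage": "spiral",
    },
    "deuterostomes": {
        "blastopore_fate": "anus",           # mouth develops secondarily
        "coelom_formation": "enterocoelic",  # pouching from the endoderm
        "cleavage": "radial",
    },
}

def lineage_for(blastopore_fate):
    """Return the lineage(s) showing a given blastopore fate."""
    return [name for name, traits in DEVELOPMENT.items()
            if traits["blastopore_fate"] == blastopore_fate]

print(lineage_for("mouth"))
```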

All this suggests the deuterostomes and protostomes are separate, monophyletic lineages. The main phyla of deuterostomes are the Echinodermata and Chordata.[90] The former are radially symmetric and exclusively marine, such as starfish, sea urchins, and sea cucumbers.[91] The latter are dominated by the vertebrates, animals with backbones.[92] These include fish, amphibians, reptiles, birds, and mammals.[93]

In addition to these, the deuterostomes also include the Hemichordata, or acorn worms, which are thought to be closely related to Echinodermata forming a group known as Ambulacraria.[94][95] Although they are not especially prominent today, the important fossil graptolites may belong to this group.
Ecdysozoa
The Ecdysozoa are protostomes, named after the common trait of growth by moulting or ecdysis.[97] The largest animal phylum belongs here, the Arthropoda, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second-largest animal phylum. Roundworms are typically microscopic and occur in nearly every environment where there is water.[98] A number are important parasites.[99] Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom.
Lophotrochozoa
The Lophotrochozoa, which evolved within Protostomia, include two of the most successful animal phyla, the Mollusca and Annelida.[100][101] The former, the second-largest animal phylum by number of described species, includes animals such as snails, clams, and squids; the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long been considered close relatives because of the common presence of trochophore larvae, though the annelids were once considered closer to the arthropods because both are segmented.[102] Now, this is generally considered convergent evolution, owing to many morphological and genetic differences between the two phyla.[103] Lophotrochozoa also includes the Nemertea or ribbon worms, the Sipuncula, and several phyla that have a ring of ciliated tentacles around the mouth, called a lophophore.[104] These were traditionally grouped together as the lophophorates,[105] but it now appears that the lophophorate group may be paraphyletic,[106] with some closer to the nemerteans and some to the molluscs and annelids.[107][108] They include the Brachiopoda or lamp shells, which are prominent in the fossil record, the Entoprocta, the Phoronida, and possibly the Bryozoa or moss animals.[109]

The Platyzoa include the phylum Platyhelminthes, the flatworms.[110] These were originally considered some of the most primitive Bilateria, but it now appears they developed from more complex ancestors.[111] A number of parasites are included in this group, such as the flukes and tapeworms.[110] Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha.[112] The other platyzoan phyla are mostly microscopic and pseudocoelomate. The most prominent are the Rotifera or rotifers, which are common in aqueous environments. They also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and possibly the Cycliophora.[113] These groups share the presence of complex jaws, from which they are called the Gnathifera.

A relationship between the Brachiopoda and Nemertea has been suggested by molecular data,[114] and a second study supports it.[115] That study also suggested that Annelida and Mollusca may be sister clades, as has another study.[116] This clade of Annelida and Mollusca has been termed the Neotrochozoa.

The Office of Special Plans
May 21, 2017

The Office of Special Plans (OSP), which existed from September 2002 to June 2003, was a Pentagon unit created by Paul Wolfowitz and Douglas Feith, and headed by Feith, as charged by then-United States Secretary of Defense Donald Rumsfeld, to supply senior George W. Bush administration officials with raw intelligence (unvetted by intelligence analysts, see Stovepiping) pertaining to Iraq.[1] A similar unit, called the Iranian Directorate, was created several years later, in 2006, to deal with intelligence on Iran.[2]
In an interview with the Scottish Sunday Herald, former Central Intelligence Agency (CIA) officer Larry C. Johnson said the OSP was “dangerous for US national security and a threat to world peace. [The OSP] lied and manipulated intelligence to further its agenda of removing Saddam. It’s a group of ideologues with pre-determined notions of truth and reality. They take bits of intelligence to support their agenda and ignore anything contrary. They should be eliminated.”[3]

Seymour Hersh writes that, according to an unnamed Pentagon adviser, “[OSP] was created in order to find evidence of what Wolfowitz and his boss, Defense Secretary Donald Rumsfeld, wanted to be true—that Saddam Hussein had close ties to Al Qaeda, and that Iraq had an enormous arsenal of chemical, biological, and possibly even nuclear weapons (WMD) that threatened the region and, potentially, the United States. […] ‘The agency [CIA] was out to disprove linkage between Iraq and terrorism,’ the Pentagon adviser told me. ‘That’s what drove them. If you’ve ever worked with intelligence data, you can see the ingrained views at C.I.A. that color the way it sees data.’ The goal of Special Plans, he said, was ‘to put the data under the microscope to reveal what the intelligence community can’t see.'”[4]

These allegations are supported by an annex to the first part of Senate Intelligence Committee’s Report of Pre-war Intelligence on Iraq published in July 2004. The review, which was highly critical of the CIA’s Iraq intelligence generally but found its judgments were right on the lack of an Iraq-al Qaeda relationship, suggests that the OSP, if connected to an “Iraqi intelligence cell” also headed by Douglas Feith which is described in the annex, sought to discredit and cast doubt on CIA analysis in an effort to establish a connection between Saddam Hussein and terrorism. In one instance, in response to a cautious CIA report, “Iraq and al-Qa’eda: A Murky Relationship”, the annex relates that “one of the individuals working for the [intelligence cell led by Feith] stated that the June [2002] report, ‘…should be read for content only – and CIA’s interpretation ought to be ignored.'”[5]

Douglas Feith called the office’s report a much-needed critique of the CIA’s intelligence. “It’s healthy to criticize the CIA’s intelligence”, Feith said. “What the people in the Pentagon were doing was right. It was good government.” Feith also rejected accusations he attempted to link Iraq to a formal relationship with Al Qaeda. “No one in my office ever claimed there was an operational relationship”, Feith said. “There was a relationship.”[6]

In another instance, an “Iraqi intelligence cell” briefing to Rumsfeld and Wolfowitz in August 2002 condemned the CIA’s intelligence assessment techniques and denounced the CIA’s “consistent underestimation” of matters dealing with the alleged Iraq-al-Qaeda co-operation. In September 2002, two days before the CIA’s final assessment of the Iraq-al Qaeda relationship, Feith briefed senior advisers to Dick Cheney and Condoleezza Rice, undercutting the CIA’s credibility and alleging “fundamental problems” with CIA intelligence-gathering. As reported in the conservative British newspaper The Daily Telegraph, “Senator Jay Rockefeller, senior Democrat on the [Senate] committee, said that Mr Feith’s cell may even have undertaken ‘unlawful’ intelligence-gathering initiatives.”[7]

In February 2007, the Pentagon’s inspector general issued a report that concluded that Feith’s office “developed, produced, and then disseminated alternative intelligence assessments on the Iraq and al Qaida relationship, which included some conclusions that were inconsistent with the consensus of the Intelligence Community, to senior decision-makers.” The report found that these actions were “inappropriate” though not “illegal.” Senator Carl Levin, Chair of the Senate Armed Services Committee, stated that “The bottom line is that intelligence relating to the Iraq-al-Qaeda relationship was manipulated by high-ranking officials in the Department of Defense to support the administration’s decision to invade Iraq. The inspector general’s report is a devastating condemnation of inappropriate activities in the DOD policy office that helped take this nation to war.”[8] At Senator Levin’s insistence, on April 6, 2007, the Pentagon’s Inspector General’s Report was declassified and released to the public.[9]

Feith stated that he “felt vindicated” by the report.[10] He told the Washington Post that his office produced “a criticism of the consensus of the intelligence community, and in presenting it I was not endorsing its substance.”[8]

Feith also said the inspector general’s report amounted to circular logic: “The people in my office were doing a criticism of the intelligence community consensus”, Feith said. “By definition, that criticism varied. If it didn’t vary, they wouldn’t have done the criticism.”[11]
Journalist Larisa Alexandrovna of The Raw Story reported in 2006 that the OSP “deployed several extra-legal and unapproved task force missions” in Iraq both before and after the beginning of combat. The teams operated independently of other operations, occasionally causing confusion on the battlefield. The teams appear to have had a political rather than military mission; specifically, to find Iraqi intelligence officers willing to come up with evidence of WMD in Iraq whether or not such weapons actually existed:

“They come in the summer of 2003, bringing in Iraqis, interviewing them”, [a source close to the UN Security Council] said. “Then they start talking about WMD and they say to [these Iraqi intelligence officers] that ‘Our President is in trouble. He went to war saying there are WMD and there are no WMD. What can we do? Can you help us?'”[12]
According to the United Nations source, the intelligence officers did not cooperate with the OSP forces because they were aware that forged WMD evidence “would not pass the smell test and could be shown to be not of Iraqi origin and not using Iraqi methodology.”
Larry Franklin, an analyst and Iran expert in the Feith office, has been charged with espionage, as part of a larger FBI investigation (see Lawrence Franklin espionage scandal). The scandal involves passing information regarding United States policy towards Iran to Israel via the American Israel Public Affairs Committee. Feith’s role is also being investigated.[13]

According to The Guardian, Feith’s office had an unconventional relationship with Israel’s intelligence services:

The OSP was an open and largely unfiltered conduit to the White House not only for the Iraqi opposition. It also forged close ties to a parallel, ad hoc intelligence operation inside Ariel Sharon’s office in Israel specifically to bypass Mossad and provide the Bush administration with more alarmist reports on Saddam’s Iraq than Mossad was prepared to authorise.
“None of the Israelis who came were cleared into the Pentagon through normal channels,” said one source familiar with the visits. Instead, they were waved in on Mr Feith’s authority without having to fill in the usual forms.
The exchange of information continued a long-standing relationship Mr Feith and other Washington neo-conservatives had with Israel’s Likud party.[14]
Allegations have also been made that Pentagon employees in the Feith office have been involved in plans for overthrowing the governments of Iran and Syria.[15]

When Former NSA Chief General Michael Hayden testified before the Senate Hearing on his nomination as Director of Central Intelligence in May 2006, he was questioned by Senator Carl Levin (D-MI) on the pressure exerted by the Office of Special Plans on the intelligence community over the question of Hussein’s links to al-Qaeda. Hayden explained that he was not comfortable with the OSP’s analysis: “I got three great kids, but if you tell me go out and find all the bad things they’ve done, Hayden, I can build you a pretty good dossier, and you’d think they were pretty bad people, because that was what I was looking for and that’s what I’d build up. That would be very wrong. That would be inaccurate. That would be misleading.” He also acknowledged that after “repeated inquiries from the Feith office” he put a disclaimer on NSA intelligence assessments of Iraq/al-Qaeda contacts.[16]

Weapons of mass destruction: media coverage, public perception, popular culture and common hazard symbols
May 21, 2017

In 2004, the Center for International and Security Studies at Maryland (CISSM) released a report[54] examining the media’s coverage of WMD issues during three separate periods: nuclear weapons tests by India and Pakistan in May 1998; the U.S. announcement of evidence of a North Korean nuclear weapons program in October 2002; and revelations about Iran’s nuclear program in May 2003. The CISSM report notes that poor coverage resulted less from political bias among the media than from tired journalistic conventions. The report’s major findings were that:

Most media outlets represented WMD as a monolithic menace, failing to adequately distinguish between weapons programs and actual weapons or to address the real differences among chemical, biological, nuclear, and radiological weapons.
Most journalists accepted the Bush administration’s formulation of the “War on Terror” as a campaign against WMD, in contrast to coverage during the Clinton era, when many journalists made careful distinctions between acts of terrorism and the acquisition and use of WMD.
Many stories stenographically reported the incumbent administration’s perspective on WMD, giving too little critical examination of the way officials framed the events, issues, threats, and policy options.
Too few stories proffered alternative perspectives to the official line, a problem exacerbated by the journalistic prioritizing of breaking-news stories and the “inverted pyramid” style of storytelling.
In a separate study published in 2005,[55] a group of researchers assessed the effects that reports and retractions in the media had on people’s memory regarding the search for WMD in Iraq during the 2003 Iraq War. The study focused on populations in two coalition countries (Australia and the United States) and one opposed to the war (Germany). Results showed that U.S. citizens generally did not correct initial misconceptions regarding WMD, even following disconfirmation; Australian and German citizens were more responsive to retractions. Dependence on the initial source of information led a substantial minority of Americans to exhibit the false memory that WMD had indeed been discovered, when they had not. This led to three conclusions:

The repetition of tentative news stories, even if they are subsequently disconfirmed, can assist in the creation of false memories in a substantial proportion of people.
Once information is published, its subsequent correction does not alter people’s beliefs unless they are suspicious about the motives underlying the events the news stories are about.
When people ignore corrections, they do so irrespective of how certain they are that the corrections occurred.
A poll conducted between June and September 2003 asked people whether they thought evidence of WMD had been discovered in Iraq since the war ended. They were also asked which media sources they relied upon. Those who obtained their news primarily from Fox News were three times as likely to believe that evidence of WMD had been discovered in Iraq as those who relied on PBS and NPR for their news, and one third more likely than those who primarily watched CBS.
In 2006 Fox News reported the claims of two Republican lawmakers that WMDs had been found in Iraq,[57] based upon unclassified portions of a report by the National Ground Intelligence Center. Quoting from the report, Senator Rick Santorum said “Since 2003, coalition forces have recovered approximately 500 weapons munitions which contain degraded mustard or sarin nerve agent”. According to David Kay, who appeared before the U.S. House Armed Services Committee to discuss these badly corroded munitions, they were leftovers, many years old, improperly stored or destroyed by the Iraqis.[58] Charles Duelfer agreed, stating on NPR’s Talk of the Nation: “When I was running the ISG – the Iraq Survey Group – we had a couple of them that had been turned in to these IEDs, the improvised explosive devices. But they are local hazards. They are not a major, you know, weapon of mass destruction.”[59]

Later, WikiLeaks disclosures showed that WMDs of these kinds continued to be found as the occupation of Iraq continued.[60]

Many news agencies, including Fox News, reported the conclusions of the CIA that, based upon the investigation of the Iraq Survey Group, WMDs had yet to be found in Iraq.
Awareness and opinions of WMD have varied during the course of their history. Their threat is a source of unease, security, and pride to different people. The anti-WMD movement is embodied most in nuclear disarmament, and led to the formation of the British Campaign for Nuclear Disarmament in 1957.

In order to increase awareness of all kinds of WMD, in 2004 the nuclear physicist and Nobel Peace Prize winner Joseph Rotblat inspired the creation of The WMD Awareness Programme[63] to provide trustworthy and up-to-date information on WMD worldwide.

In 1998 University of New Mexico’s Institute for Public Policy released their third report[64] on U.S. perceptions – including the general public, politicians and scientists – of nuclear weapons since the breakup of the Soviet Union. Risks of nuclear conflict, proliferation, and terrorism were seen as substantial.

While maintenance of the U.S. nuclear arsenal was considered above average in importance, there was widespread support for a reduction in the stockpile, and very little support for developing and testing new nuclear weapons.

Also in 1998, but after the UNM survey was conducted, nuclear weapons became an issue in India’s election of March,[65] in relation to political tensions with neighboring Pakistan. Prior to the election the Bharatiya Janata Party (BJP) announced it would “declare India a nuclear weapon state” after coming to power.

BJP won the elections, and on 14 May, three days after India tested nuclear weapons for the second time, a public opinion poll reported that a majority of Indians favored the country’s nuclear build-up.[citation needed]

On 15 April 2004, the Program on International Policy Attitudes (PIPA) reported[66] that U.S. citizens showed high levels of concern regarding WMD, and that preventing the spread of nuclear weapons should be “a very important U.S. foreign policy goal”, accomplished through multilateral arms control rather than the use of military threats.

A majority also believed the United States should be more forthcoming with its biological research and its Nuclear Non-Proliferation Treaty commitment of nuclear arms reduction.

A Russian opinion poll conducted on 5 August 2005 indicated that half the population believes new nuclear powers have the right to possess nuclear weapons.[67] 39% believe the Russian stockpile should be reduced, though not fully eliminated.
Weapons of mass destruction and their related impacts have been a mainstay of popular culture since the beginning of the Cold War, as both political commentary and humorous outlet. The actual phrase “weapons of mass destruction” has been used similarly and as a way to characterise any powerful force or product since the Iraqi weapons crisis in the lead up to the Coalition invasion of Iraq in 2003.
The international radioactivity symbol (also known as trefoil) first appeared in 1946, at the University of California, Berkeley Radiation Laboratory. At the time, it was rendered as magenta, and was set on a blue background.[68]

It is drawn with a central circle of radius R, the blades having an internal radius of 1.5R and an external radius of 5R, and separated from each other by 60°.[69] It is meant to represent a radiating atom.
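Those proportions can be checked with a little arithmetic. The sketch below (the function name and return format are my own, not any standard) computes the three blade wedges from a chosen central radius R, assuming blades and the gaps between them each span 60°:

```python
# Trefoil proportions from the text: central disc of radius R,
# blades from 1.5R to 5R, three blades separated by 60 degrees.
def trefoil_blades(R=1.0, start_deg=90.0):
    """Return (inner_radius, outer_radius, start_angle, end_angle)
    for each of the three 60-degree blades; angles in degrees."""
    inner, outer = 1.5 * R, 5.0 * R
    blades = []
    for k in range(3):
        # blades and gaps alternate every 60 degrees,
        # so blade start angles are 120 degrees apart
        a0 = (start_deg + 120.0 * k) % 360.0
        blades.append((inner, outer, a0, (a0 + 60.0) % 360.0))
    return blades

for blade in trefoil_blades(R=2.0):
    print(blade)
```

Each tuple could then be handed to a wedge primitive in a plotting library to render the symbol.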

The International Atomic Energy Agency found that the trefoil radiation symbol is unintuitive and can be variously interpreted by those uneducated in its meaning; its role as a hazard warning was therefore compromised, as it did not clearly indicate “danger” to many non-Westerners and children who encountered it. As a result of this research, a new radiation hazard symbol was developed in 2007 to be placed near the most dangerous parts of radiation sources; it features a skull, a figure running away, and a red rather than yellow background.[70]

The red background is intended to convey urgent danger, and the sign is intended to be used on equipment where very strong radiation fields can be encountered if the device is dismantled or otherwise tampered with. The intended use of the sign is not in a place where the normal user will see it, but in a place where it will be seen by someone who has started to dismantle a radiation-emitting device or equipment. The aim of the sign is to warn people such as scrap metal workers to stop work and leave the area.[71]

Biological weaponry/hazard symbol
The biohazard symbol was developed by the Dow Chemical Company in the 1960s for their containment products.[72]

According to Charles Baldwin, an environmental-health engineer who contributed to its development:[69]

We wanted something that was memorable but meaningless, so we could educate people as to what it means.

What do you understand by Emergency sanitation?
May 21, 2017

Emergency sanitation is the management and technical processes required to provide access to sanitation in emergency situations such as after natural disasters and during relief operations for refugees and Internally Displaced Persons (IDPs). There are three phases: immediate, short-term and long-term. In the immediate phase, the focus is on managing open defecation, and toilet technologies might include very basic latrines, pit latrines, bucket toilets, container-based toilets, and chemical toilets.

Providing handwashing facilities and management of fecal sludge are also part of emergency sanitation.
The term “emergency” is perceived differently by different people and organisations. In a general sense, an emergency may be considered a phenomenon originating from a man-made and/or natural disaster that poses a serious, usually sudden, threat to the health or well-being of the affected community, which relies on external assistance to cope with the situation.[1]

There are different categories of emergency depending on its time frame, whether it lasts for a few weeks, several months or years.[1]

The number of people who are and will be affected by catastrophes (humanitarian crises and natural disasters), which are increasing in magnitude and frequency, is rising rapidly. The affected people are exposed to dangers such as temporary homelessness and risks to life and health.[2]
To address the problem of public health and the spread of dangerous diseases that come as a result of lack of sanitation and open defecation, humanitarian actors focus on the construction of, for example, pit latrines and the implementation of hygiene promotion programmes.[3]

The supply of drinking water in an urban-setting emergency has been improved by the introduction of standardised, rapid deployment kits.

In the immediate emergency phase, the focus is on managing open defecation, and toilet technologies might include very basic latrines, pit latrines, bucket toilets, container-based toilets, and chemical toilets. The short-term phase might also involve technologies such as urine-diverting dry toilets, septic tanks, and decentralized wastewater systems.
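As a rough illustration, the phase-to-technology mapping described above can be written as a simple lookup. This is only a sketch: the assumption that short-term responses keep the immediate-phase options available is mine, inferred from the text’s “might also involve”.

```python
# Toilet technologies by emergency phase, as listed in the text.
PHASE_TECHNOLOGIES = {
    "immediate": ["basic latrine", "pit latrine", "bucket toilet",
                  "container-based toilet", "chemical toilet"],
    "short term": ["urine-diverting dry toilet", "septic tank",
                   "decentralized wastewater system"],
}

def options(phase):
    """Technologies considered in a given phase; assumes short-term
    responses may also continue using the immediate-phase options."""
    extra = PHASE_TECHNOLOGIES["immediate"] if phase == "short term" else []
    return PHASE_TECHNOLOGIES.get(phase, []) + extra

print(len(options("short term")))  # → 8 (3 short-term + 5 immediate)
```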
The provision of sanitation programmes is usually more challenging than water supply because the choice of technologies is limited.[3][4] This is exacerbated by the overwhelming and diverse needs for water, sanitation and hygiene (WASH).[4]

Challenges with excreta disposal in emergencies include:

Building latrines in areas where pits cannot be dug, desludging latrines, no-toilet options, and the final treatment or disposal of the fecal sludge.[5]
Weak community participation and finding hygiene promotion designs that are suitable for a given context to make the WASH interventions sustainable.
Newly arriving IDP or refugee populations can usually only be settled in less than ideal areas, such as land that is prone to regular flooding or that is very dry with rocky ground.[citation needed] This makes the provision of safe sanitation facilities and other infrastructure very difficult.
In long running emergencies, the safe decommissioning or desludging of previously quickly built sanitation facilities can also become a serious challenge.[citation needed]
Humanitarian actors need to understand the importance of better preparation and resilience, the need for exit strategies, and the need to take the environment into consideration.
Civil defense, civil defence (see spelling differences) or civil protection is an effort to protect the citizens of a state (generally non-combatants) from military attacks and natural disasters. It uses the principles of emergency operations: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. It became widespread after the threat of nuclear weapons was realized.

Since the end of the Cold War, the focus of civil defense has largely shifted from military attack to emergencies and disasters in general. The new concept is described by a number of terms, each of which has its own specific shade of meaning, such as crisis management, emergency management, emergency preparedness, contingency planning, emergency services, and civil protection.

In some countries, civil defense is seen as a key part of “total defense”. For example, in Sweden, the Swedish word totalförsvar refers to the commitment of a wide range of resources of the nation to its defense – including to civil protection. Correspondingly, some countries (notably the Soviet Union) have or have had military-organized civil defense units (Civil Defense Troops) as part of their armed forces or as a paramilitary service.

Have you heard about the Act of God?
May 21, 2017

In legal usage throughout the English-speaking world, an Act of God[1] is a natural disaster outside human control, such as an earthquake or tsunami, for which no person can be held responsible. An Act of God may amount to an exception to liability in contracts (as under the Hague-Visby Rules);[2] or it may be an “insured peril” in an insurance policy.[3]

By contrast, other extraordinary man-made or political events are deemed force majeure.
In the law of contracts, an Act of God may be interpreted as an implied defense under the rule of impossibility or impracticability. If so, the promise is discharged because of unforeseen occurrences, which were unavoidable and would result in insurmountable delay, expense, or other material breach.

Under the English common law, contractual obligations were deemed sacrosanct, so failure to honour a contract could lead to an order for specific performance or internment in debtors’ prison. In 1863, this harsh rule was softened by the case of Taylor v Caldwell, which introduced the doctrine of frustration of contract, providing that “where a contract becomes impossible to perform and neither party is at fault, both parties may be excused their obligations”. In this case, a music hall was burned down by Act of God before a contract of hire could be fulfilled, and the court deemed the contract frustrated.

In other contracts, such as indemnification, an act of God may be no excuse, and in fact may be the central risk assumed by the promisor—e.g., flood insurance or crop insurance—the only variables being the timing and extent of the damage. In many cases, failure by way of ignoring obvious risks due to “natural phenomena” will not be sufficient to excuse performance of the obligation, even if the events are relatively rare: e.g., the year 2000 problem in computers. Under the Uniform Commercial Code, 2-615, failure to deliver goods sold may be excused by an “act of God” if the absence of such act was a “basic assumption” of the contract, and the act has made the delivery “commercially impracticable”.

Recently, human activities have been claimed to be the root causes of some events previously considered natural disasters. In particular:

water pressure in dams releasing a geological fault (earthquake in China)[5]
geothermal injections of water provoking earthquakes (Basel, Switzerland, 2006)[6]
drilling provoking mud volcano (Java, 2008)[7]
Such events potentially threaten the legal status of acts of God and may establish liabilities where none previously existed.
UK – England and Wales
An act of God is an unforeseeable natural phenomenon, explained by Lord Hobhouse in Transco plc v Stockport Metropolitan Borough Council as describing an event:

(i) which involves no human agency;
(ii) which is not realistically possible to guard against;
(iii) which is due directly and exclusively to natural causes; and
(iv) which could not have been prevented by any amount of foresight, plans, and care.
UK – Scotland
An Act of God is described in Tennant v. Earl of Glasgow (1864 2 M (HL) 22) as: “Circumstances which no human foresight can provide against, and of which human prudence is not bound to recognize the possibility, and which when they do occur, therefore, are calamities that do not involve the obligation of paying for the consequences that may result from them.”

United States
In the law of torts, an act of God may be asserted as a type of intervening cause, the lack of which would have avoided the cause or diminished the result of liability (e.g., but for the earthquake, the old, poorly constructed building would still be standing). However, foreseeable results of unforeseeable causes may still raise liability. For example, a bolt of lightning strikes a ship carrying volatile compressed gas, resulting in the expected explosion. Liability may be found if the carrier did not use reasonable care to protect against sparks, regardless of their origins. Similarly, strict liability could defeat a defense for an act of God where the defendant has created the conditions under which any accident would result in harm. For example, a long-haul truck driver takes a shortcut on a back road and the load is lost when the road is destroyed in an unforeseen flood. Other cases find that a common carrier is not liable for the unforeseeable forces of nature. See, e.g., Memphis & Charleston Railroad Co. v. Reeves, 77 U.S. 176 (1870).

A particularly interesting example is that of “rainmaker” Charles Hatfield, who was hired in 1915 by the city of San Diego to fill the Morena reservoir to capacity with rainwater for $10,000. The region was soon flooded by heavy rains, nearly bursting the reservoir’s dam, killing nearly 20 people, destroying 110 bridges (leaving 2), knocking out telephone and telegraph lines, and causing an estimated $3.5 million in damage in total. When the city refused to pay him (he had forgotten to sign the contract), he sued the city. The floods were ruled an act of God, excluding him from liability but also from payment.
The phrase “act of God” is sometimes used to attribute an event to divine intervention, often in conjunction with a natural disaster or tragic event. A miracle, by contrast, is often considered a fortuitous event attributed to divine intervention; some consider it separate from acts of nature and instead related to fate or destiny.[8]

Christian theologians differ on their views and interpretations of scripture.[9] R.C. Sproul implies that God causes a disaster when he speaks of Divine Providence: “In a universe governed by God, there are no chance events”.[10] Others indicate that God may allow a tragedy to occur.[11][12]

Others accept unfortunate events as part of life[13] and reference Matthew 5:45 (KJV): “for he maketh his sun to rise on the evil and on the good, and sendeth rain on the just and on the unjust.”

Some Natural Disasters People Have Experienced
May 21, 2017

A natural disaster is a major adverse event resulting from natural processes of the Earth; examples include floods, hurricanes, tornadoes, volcanic eruptions, earthquakes, tsunamis, and other geologic processes. A natural disaster can cause loss of life or property damage,[1] and typically leaves some economic damage in its wake, the severity of which depends on the affected population’s resilience, or ability to recover and also on the infrastructure available.[2]

An adverse event will not rise to the level of a disaster if it occurs in an area without a vulnerable population.[3][4] In a vulnerable area, however, such as Nepal during the 2015 earthquake, an earthquake can have disastrous consequences and leave lasting damage, requiring years to repair.
Avalanches and landslides
A landslide is described as an outward and downward slope movement of an abundance of slope-forming materials, including rock, soil, artificial fill, or a combination of these.[5]

During World War I, an estimated 40,000 to 80,000 soldiers died as a result of avalanches during the mountain campaign in the Alps at the Austrian-Italian front. Many of the avalanches were caused by artillery fire.[6][7]

Earthquakes
See also: Lists of earthquakes
An earthquake is the result of a sudden release of energy in the Earth’s crust that creates seismic waves. At the Earth’s surface earthquakes manifest themselves by vibration, shaking and sometimes displacement of the ground. Earthquakes are caused by slippage within geological faults. The underground point of origin of the earthquake is called the seismic focus. The point directly above the focus on the surface is called the epicenter. Earthquakes by themselves rarely kill people or wildlife. It is usually the secondary events that they trigger such as building collapse, fires, tsunamis (seismic sea waves) and volcanoes. Many of these could possibly be avoided by better construction, safety systems, early warning and planning.
Sinkholes
When natural erosion or human mining makes the ground too weak to support the structures built on it, the ground can collapse and produce a sinkhole. For example, the 2010 Guatemala City sinkhole, which killed fifteen people, was caused when heavy rain from Tropical Storm Agatha, diverted by leaking pipes into a pumice bedrock, led to the sudden collapse of the ground beneath a factory building.
Volcanic eruptions
Volcanoes can cause widespread destruction and consequent disaster in several ways. First, the eruption itself may cause harm through the explosion of the volcano or falling rocks. Second, lava produced during the eruption destroys buildings, plants and animals in its path due to its extreme heat. Third, volcanic ash (generally meaning the cooled ash) may form a cloud and settle thickly in nearby locations; when mixed with water it forms a concrete-like material. In sufficient quantity, ash may cause roofs to collapse under its weight, but even small quantities will harm humans if inhaled. Since the ash has the consistency of ground glass, it causes abrasion damage to moving parts such as engines.

The main killer of humans in the immediate surroundings of a volcanic eruption is the pyroclastic flow, a cloud of hot volcanic ash that builds up in the air above the volcano and rushes down the slopes when the eruption no longer supports the lifting of the gases. It is believed that Pompeii was destroyed by a pyroclastic flow. A lahar is a volcanic mudflow or landslide. The 1953 Tangiwai disaster was caused by a lahar, as was the 1985 Armero tragedy, in which the town of Armero was buried and an estimated 23,000 people were killed.

A specific type of volcano is the supervolcano. According to the Toba catastrophe theory, 75,000 to 80,000 years ago a supervolcanic event at Lake Toba reduced the human population to 10,000 or even 1,000 breeding pairs, creating a bottleneck in human evolution.[8] It also killed three-quarters of all plant life in the northern hemisphere. The main danger from a supervolcano is the immense cloud of ash, which has a disastrous global effect on climate and temperature for many years.
Hydrological disasters
A hydrological disaster is a violent, sudden and destructive change either in the quality of Earth’s water or in the distribution or movement of water on land, below the surface, or in the atmosphere.

Floods
See also: List of floods
A flood is an overflow of water that submerges land.[9] The EU Floods Directive defines a flood as a temporary covering by water of land which is usually not covered by water.[10] In the sense of ‘flowing water’, the word may also be applied to the inflow of the tides. Flooding may result from the volume of water within a body of water, such as a river or lake, overflowing, with the result that some of the water escapes its usual boundaries.[11] While the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, flooding is not significant unless the water covers land used by people, such as a village, city or other inhabited area, roads or expanses of farmland.

Limnic eruptions
Main article: Limnic eruption
A limnic eruption occurs when a gas, usually CO2, suddenly erupts from deep lake water, posing the threat of suffocating wildlife, livestock and humans. Such an eruption may also cause tsunamis in the lake as the rising gas displaces water. Scientists believe landslides, volcanic activity, or explosions can trigger such an eruption. To date, only two limnic eruptions have been observed and recorded. In 1984, in Cameroon, a limnic eruption in Lake Monoun caused the deaths of 37 nearby residents, and at nearby Lake Nyos in 1986 a much larger eruption killed between 1,700 and 1,800 people by asphyxiation.

Tsunami
Main article: Tsunami
A tsunami (plural: tsunamis or tsunami; from Japanese: 津波, lit. “harbour wave”; English pronunciation: /tsuːˈnɑːmi/), also known as a seismic sea wave or as a tidal wave, is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Tsunamis can be caused by undersea earthquakes such as the 2004 Boxing Day tsunami, or by landslides such as the one in 1958 at Lituya Bay, Alaska, or by volcanic eruptions such as the ancient eruption of Santorini. On March 11, 2011, a tsunami occurred near Fukushima, Japan and spread through the Pacific.
Blizzards
Blizzards are severe winter storms characterized by heavy snow and strong winds. When high winds stir up snow that has already fallen, it is known as a ground blizzard. Blizzards can impact local economic activities, especially in regions where snowfall is rare. The Great Blizzard of 1888 affected the United States, destroying many tons of wheat crops; in Asia, the 2008 Afghanistan blizzard and the 1972 Iran blizzard were also significant events. The 1993 Superstorm originated in the Gulf of Mexico and traveled north, causing damage in 26 states as well as Canada and leading to more than 300 deaths.[12]

Cyclonic storms
Cyclone, tropical cyclone, hurricane, and typhoon are different names for the same phenomenon: a cyclonic storm system that forms over the oceans. Which term is used depends on where the storm originates. In the Atlantic and Northeast Pacific, the term “hurricane” is used; in the Northwest Pacific it is referred to as a “typhoon”, and “cyclones” occur in the South Pacific and Indian Ocean.

The deadliest tropical cyclone on record was the 1970 Bhola cyclone; the deadliest Atlantic hurricane was the Great Hurricane of 1780, which devastated Martinique, St. Eustatius and Barbados. Another notable hurricane is Hurricane Katrina, which devastated the Gulf Coast of the United States in 2005.

Droughts
Main article: Drought
Drought is the unusual dryness of soil caused by significantly lower-than-average rainfall over a prolonged period, resulting in crop failure and a shortage of water for other uses. Hot dry winds, shortage of water, high temperatures and the consequent evaporation of moisture from the ground can contribute to conditions of drought.

Well-known historical droughts include the 1997–2009 Millennium Drought in Australia, which led to a water supply crisis across much of the country and prompted the construction of many desalination plants for the first time (see list). In 2011, the State of Texas lived under a drought emergency declaration for the entire calendar year and suffered severe economic losses.[13] The drought caused the Bastrop fires.

Thunderstorms
Main article: Thunderstorm
Severe storms, dust clouds, and volcanic eruptions can generate lightning. Apart from the damage typically associated with storms, such as winds, hail, and flooding, the lightning itself can damage buildings, ignite fires and kill by direct contact. Especially deadly lightning incidents include a 2007 strike in Ushari Dara, a remote mountain village in northwestern Pakistan, that killed 30 people,[14] the crash of LANSA Flight 508 which killed 91, and a fuel explosion in Dronka, Egypt caused by lightning in 1994 which killed 469.[15] Most lightning deaths occur in the poor countries of America and Asia, where lightning is common and adobe mud brick housing provides little protection.[16]
Hailstorms
Hailstorms are storms in which precipitation falls as balls of ice (hail) rather than melting before hitting the ground. A particularly damaging hailstorm hit Munich, Germany, on July 12, 1984, causing about $2 billion in insurance claims.

Heat waves
Main article: Heat wave
A heat wave is a period of unusually and excessively hot weather. The worst heat wave in recent history was the European Heat Wave of 2003. A summer heat wave in Victoria, Australia, created conditions which fuelled the massive bushfires in 2009. Melbourne experienced three days in a row of temperatures exceeding 40 °C (104 °F), with some regional areas sweltering through much higher temperatures. The bushfires, collectively known as “Black Saturday”, were partly the act of arsonists. The 2010 Northern Hemisphere summer brought severe heat waves that killed over 2,000 people and resulted in hundreds of wildfires, which caused widespread air pollution and burned thousands of square miles of forest.
Tornadoes
A tornado is a violent and dangerous rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud, or the base of a cumulus cloud in rare cases. It is also referred to as a twister or a cyclone,[17] although the word cyclone is used in meteorology in a wider sense, to refer to any closed low pressure circulation. Tornadoes come in many shapes and sizes, but are typically in the form of a visible condensation funnel, whose narrow end touches the earth and is often encircled by a cloud of debris and dust. Most tornadoes have wind speeds less than 110 miles per hour (177 km/h), are approximately 250 feet (80 m) across, and travel a few miles (several kilometers) before dissipating. The most extreme tornadoes can attain wind speeds of more than 300 mph (480 km/h), stretch more than two miles (3 km) across, and stay on the ground for dozens of miles (perhaps more than 100 km).
Wildfires
Wildfires are large fires which often start in wildland areas. Common causes include lightning and drought, but wildfires may also be started by human negligence or arson. They can spread to populated areas and can thus be a threat to humans and property, as well as wildlife. Notable cases of wildfires were the 1871 Peshtigo Fire in the United States, which killed at least 1,700 people, and the 2009 Victorian bushfires in Australia.

Something You Don’t Know About the Buildings We Live In
May 21, 2017

A building or edifice is a structure with a roof and walls standing more or less permanently in one place, such as a house or factory.[1] Buildings come in a variety of sizes, shapes and functions, and have been adapted throughout history in response to numerous factors, from the building materials available to weather conditions, land prices, ground conditions, specific uses and aesthetic considerations. To better understand the term building, compare the list of nonbuilding structures.

Buildings serve several needs of society: primarily as shelter from the weather, and as security, living space, privacy, storage for belongings, and a comfortable place to live and work. A building as a shelter represents a physical division between the human habitat (a place of comfort and safety) and the outside (a place that at times may be harsh and harmful).

Ever since the first cave paintings, buildings have also become objects or canvasses of much artistic expression. In recent years, interest in sustainable planning and building practices has also become an intentional part of the design process of many new buildings.
The word building is both a noun and a verb: the structure itself and the act of making it. As a noun, a building is ‘a structure that has a roof and walls and stands more or less permanently in one place’;[1] “there was a three-storey building on the corner”; “it was an imposing edifice”. In the broadest interpretation, a fence or wall is a building.[2] However, the word structure is used more broadly than building, including natural and man-made formations,[3] and does not necessarily have walls; structure is more likely than building to be used for a fence. Sturgis’ Dictionary noted that “[building] differs from architecture in excluding all idea of artistic treatment; and it differs from construction in the idea of excluding scientific or highly skilful treatment.”[4] As a verb, building is the act of construction.

Structural height in technical usage is the height to the highest architectural detail on a building from street level. Depending on how they are classified, spires and masts may or may not be included in this height; spires and masts used as antennas are not generally included. The definition of a low-rise vs. a high-rise building is a matter of debate, but generally three storeys or less is considered low-rise.[5]
A report by Shinichi Fujimura of a shelter built 500,000 years ago[6] is doubtful, since Fujimura was later found to have faked many of his findings.[7] Supposed remains of huts found at the Terra Amata site in Nice, purportedly dating from 200,000 to 400,000 years ago,[8] have also been called into question. (See Terra Amata.) There is clear evidence of homebuilding from around 18,000 BC.[9] Buildings became common during the Neolithic (see Neolithic architecture).
Single-family residential buildings are most often called houses or homes. Residential buildings containing more than one dwelling unit are called duplexes or apartment buildings, to differentiate them from ‘individual’ houses. A condominium is an apartment that the occupant owns rather than rents. Houses may also be built in pairs (semi-detached), or in terraces where all but the two end houses have others on either side; apartments may be built around courtyards or as rectangular blocks surrounded by a piece of ground of varying size. Houses which were built as a single dwelling may later be divided into apartments or bedsitters; they may also be converted to another use, e.g. an office or a shop.

Building types may range from huts to multimillion-dollar high-rise apartment blocks able to house thousands of people. Increasing settlement density in buildings (and smaller distances between buildings) is usually a response to high ground prices resulting from many people wanting to live close to work or similar attractors. Common building materials include brick, concrete, stone, and combinations of these.

Residential buildings have different names depending on their character: if seasonal, they include the holiday cottage (vacation home) or timeshare; by size, the cottage or great house; by value, the shack or mansion; by manner of construction, the log home or mobile home; by proximity to the ground, the earth-sheltered house, stilt house, or tree house. Buildings for residents in need of special care include the nursing home, orphanage and prison; group housing includes barracks and dormitories.

Historically many people lived in communal buildings called longhouses, smaller dwellings called pit-houses and houses combined with barns sometimes called housebarns.

Buildings are defined to be substantial, permanent structures so other dwelling forms such as houseboats, yurts, and motorhomes are dwellings but not buildings.

Multi-storey
A multi-storey building is a building that has multiple floors. Sydney is a city with many multi-storey buildings; one suburb that has been notorious for poor construction is Lane Cove, where many overseas investors have been drawn in and bought poorly built buildings.

Complex
Sometimes a group of inter-related (and possibly inter-connected) buildings is referred to as a complex – for example a housing complex,[10] educational complex,[11] hospital complex, etc.
The practice of designing, constructing, and operating buildings is most usually a collective effort of different groups of professionals and trades. Depending on the size, complexity, and purpose of a particular building project, the project team may include:

A real estate developer who secures funding for the project;
One or more financial institutions or other investors that provide the funding;
Local planning and code authorities;
A surveyor who performs ALTA/ACSM and construction surveys throughout the project;
Construction managers who coordinate the effort of different groups of project participants;
Licensed architects and engineers who provide building design and prepare construction documents;
The principal design engineering disciplines, normally including civil, structural, mechanical building services or HVAC (heating, ventilation and air conditioning), electrical building services, and plumbing and drainage. Other specialist design engineers may also be involved, such as fire (prevention), acoustic, façade, building physics, telecoms, AV (audio-visual) and BMS (building management systems)/automatic controls engineers. These design engineers also prepare construction documents, which are issued to specialist contractors to obtain a price for the works and to follow for the installations;
Landscape architects;
Interior designers;
Other consultants;
Contractors who provide construction services and install building systems such as climate control, electrical, plumbing, decoration, fire protection, security and telecommunications;
Marketing or leasing agents;
Facility managers who are responsible for operating the building.
Regardless of their size or intended use, all buildings in the US must comply with zoning ordinances, building codes and other regulations such as fire codes, life safety codes and related standards.

Vehicles, such as trailers, caravans, ships and passenger aircraft, are treated as “buildings” for life safety purposes.
Any building requires a certain amount of internal infrastructure to function, including such elements as heating/cooling, power and telecommunications, and water and wastewater. Especially in commercial buildings (such as offices or factories), these can be extremely intricate systems that take up large amounts of space (sometimes located in separate areas or in double floors/false ceilings) and constitute a large part of the regular maintenance required.

Conveying systems
Systems for transport of people within buildings:

Elevator
Escalator
Moving sidewalk (horizontal and inclined)
Systems for transport of people between interconnected buildings:

Skyway
Underground city
Buildings may be damaged during construction or during maintenance. There are several other causes of building damage, such as accidents,[12] storms, explosions and subsidence caused by mining or poor foundations. Buildings may also suffer fire damage and flooding in special circumstances, and they may become dilapidated through lack of proper maintenance or through alteration work improperly carried out.
