MaryAnn Bernal's Blog, page 67
February 7, 2017
History’s most surprising statistics
History Extra
Illustrations by James Albon.
4: The number of years' wages that a pound of wool – twice dyed in best quality Tyrian purple – would cost a Roman soldier during the first century AD
Since c1500 BC, purple – a dye produced from the gland secretions of types of shellfish – was the colour of kings, priests, magistrates and emperors, with the highest quality dye originating in Tyre, in ancient Phoenicia (now modern Lebanon).
Its cost was phenomenal. In the first century AD, a pound of wool, twice dyed in best quality Tyrian purple, cost around 1,000 denarii – more than four times the annual wage of a Roman soldier. The AD 301 Edict of Diocletian (also known as the Edict of Maximum Prices), which attempted to control runaway inflation in the empire, lists the most expensive dyed silk as costing 150,000 denarii per pound! Meanwhile the poet Martial – admittedly a satirist – claimed that a praetor’s purple cloak actually cost 100 times more than a soldier’s pay.
The reason for the astronomical cost lay in obtaining the dye itself. This involved a lengthy process of fishing – using wicker traps primed with bait – followed by the extraction of minute quantities of dye from thousands of shellfish by a long, laborious and smelly process. Pliny the Elder explained the process and gave production statistics that indicate the vast number of shells required.
Pliny stated that if a mollusc gland weighed a gram (in modern weights), more than 3.5m molluscs would produce just 500 pounds of dye. Pliny the Elder was not exaggerating. In modern times, Tyrian purple has been recreated, at great expense. When the German chemist Paul Friedländer tried to recreate the colour in 1909, he needed 12,000 molluscs to produce just 1.4 ounces of dye, enough to colour a handkerchief. In 2000, a gram of Tyrian purple, made from 10,000 molluscs according to the original formula, cost 2,000 euros.
Peter Jones is author of Veni, Vidi, Vici (Atlantic Books, 2013)
10 million: The number of fleeces exported annually from England by c1300
England has often been referred to as the Australia of the Middle Ages, a reference to its booming wool trade (something that Australia experienced in the 19th century). By the 14th century, English farmers had developed breeds of sheep that produced fleeces of varying weight and quality, some of which were among the best in Europe.
English wool was widely sought after by the cloth-makers of Flanders and Italy who needed fine wool to produce the rich scarlet cloths worn by kings, nobles and bishops. The 14th century had seen a huge growth in the cloth trade, particularly in Ypres, Ghent and Bruges.
To keep up with the high demand, English wool producers expanded their flocks, often going to great trouble to keep them from harm. Many kept their sheep on hill pastures during the summer, moving them to sheltered valleys in the winter. Others built sheep houses or sheepcotes where the animals could shelter in the worst weather and where food, such as peas in straw, was kept.
It is often assumed that monasteries such as Fountains Abbey in North Yorkshire, which kept thousands of sheep, met Europe’s increasing demand for wool, but in fact the combined flocks of peasants, each of whom kept 30–50 animals, outnumbered those of the great estates. Gathering the fleeces of these scattered flocks required organisation – a role filled by entrepreneurs, woolmen or woolmongers, who bought the wool and sent it to the ports. Some of the big producers – monasteries and lay landlords – often acted as middlemen, collecting the local peasant wool and sending it along with their own.
Finances, too, were complicated, and there was much use of credit during the period. An Italian or Flemish merchant would often advance money to a producer, such as a monastery, on condition that he could buy its wool, sometimes quite cheaply. These contracts often stretched into the future, so that a monastery might have sold its wool four years in advance.
Chris Dyer is emeritus professor of regional and local history at the University of Leicester
25: The percentage of English men believed to have served in arms for king or parliament at one time or another during the Civil War
The Civil War of the 17th century saw huge numbers of men leave their towns and villages to go and fight, as England, Scotland and Ireland were torn apart by the bitter conflict between the crown and parliament. The historian Charles Carlton has calculated that, proportionately, more of the English population died in the Civil War than in the First World War, and some 25 per cent of English men are thought to have served in arms for king or parliament at one time or another.
The village of Myddle in Shropshire is the only parish in England for which we know exactly how many people went to war. This is thanks to the writings of yeoman Richard Gough, whose History of Myddle, written between 1700 and 1706, tells us that “out of these three towns – that’s to say the hamlets of Myddle parish – of Myddle, Marton and Newton, there went no less than 20 men, of which number 13 were killed in the wars...”
Gough then proceeds to name the Myddle men who went to fight, along with their occupations and whether they lived or died. “Richard Chalenor of Myddle”, he writes, “being a big lad went to Shrewsbury and there listed, and went to Edgehill Fight which was on October 23rd 1642, and was never heard of afterwards in this country...”
The experience of Myddle in the Civil War is by no means unique: it is remarkable simply for the information recorded by Gough. What’s more, his description of one John Mould – who “was shot through the leg with a musket bullet which broke the master bone of his leg” so that it remained “very crooked as long as he lived” – reminds us that, just as in modern wars, huge numbers of men returned to their daily lives physically scarred by the events of the Civil War.
In the wake of the conflict, parliament, which was now in power, provided pensions for wounded parliamentarian soldiers, but offered nothing for those who had fought for the king. In 1660, however, when the monarchy was restored in the form of Charles II, the situation was turned on its head and injured royalists received financial help. Others had to rely on the assistance of their charitable neighbours.
Gough’s writings give historians a wonderful insight into the lives of ordinary soldiers in an era that is so often recorded by the gentry alone. And, to quote Gough himself, who was a young boy during the Civil War: “If so many died out of these three [hamlets], we may reasonably guess that many thousands died in England in that war.” Gough’s History of Myddle is a fitting tribute to those men.
Professor Mark Stoyle is author of The Black Legend of Prince Rupert’s Dog: Witchcraft and Propaganda during the English Civil War (Exeter, 2011)
6: The life expectancy in weeks for newly arrived horses in South Africa during the Anglo-Boer War
Horses played an essential role in the Anglo-Boer War (1899–1902), but paid a terrible price: of the 518,704 horses and 150,781 mules and donkeys sent to British forces in South Africa during the conflict, around two thirds (347,007 horses, 53,339 mules and donkeys) never made it home.
At the start of the war, British units travelled from a northern hemisphere winter to a South African summer, meaning that cavalry horses still had their winter coats and suffered severely from the heat. What’s more, the animals endured a long sea voyage of up to six weeks before they even reached South Africa. On arrival, horses were often given no time to recover from the voyage or acclimatise to South African conditions; instead they were rushed into action right away. On top of this, some 13,144 horses and 2,816 mules and donkeys were lost on the outward voyage.
The constant demand for fresh animals meant that additional horses had to be imported but, in contrast to the ponies of the Boers, these imported horses could not eat South African foliage. It proved almost impossible to provide enough food for the animals, especially as Boer guerrillas constantly attacked British supply lines.
After the war, cavalry officer Michael Rimington recalled that the process of bringing animals to the front was “thirty days’ voyage, followed by a five or six days’ railway journey, then semi-starvation at the end of a line of communication, then some quick work followed by two or three days’ total starvation, then more work...”. Ignorance of horse care did not help either: one newly arrived soldier asked Rimington whether he should feed his horse beef or mutton, and the animals were often ridden until they simply collapsed. Little surprise, then, that the average life expectancy of a newly arrived horse in South Africa was just six weeks.
Dr Spencer Jones is author of Stemming the Tide: Officers and Leadership in the British Expeditionary Force 1914 (Helion & Co, 2013)
$1,000: The price per ounce that the US government was paying for penicillin in 1943
In 1940, a team of scientists, led by pharmacologist Howard Florey, discovered the means of extracting penicillin from the very dilute solution produced by penicillium mould. After proving that the substance could cure infections in mice, the Oxford team tested penicillin on human patients – with remarkable results.
But despite Florey taking a small sample of the mould to America and discussing production methods with the US government laboratory and several US companies, by 1943 penicillin was still being produced at scarcely more than the laboratory scale previously seen at Oxford.
After testing the substance on patients, the US government purchased penicillin from its manufacturers at a price of $200 for a million units. This was equivalent to $1,000 an ounce at a time when gold cost just $35 an ounce.
The big breakthrough for the drug came with developments in manufacturing techniques, which saw pharmaceutical companies such as Pfizer producing penicillin on a massive scale in huge vats: a single 10,000-gallon tank could produce enough penicillin to fill 60,000–70,000 two-litre bottles. The impact of this engineering triumph was intensified by the discovery in 1943 of a new strain of penicillium mould that was much better suited to growing in the deep vats than the original British strain. This new strain was first found on a melon in Peoria, Illinois, by a technician who later came to be known as Moldy Mary.
By 1945, the American pharmaceutical company Merck was selling penicillin at $6,000 per billion units at a time when penicillin in Europe was still scarce. Three years later, the price had halved and Procaine penicillin, which was metabolised more slowly (meaning fewer injections), had been introduced.
Although two large processing plants were built in Britain after the Second World War, demand for penicillin was so great and so unexpected that its cost – and that of other new drugs including streptomycin and cortisone – forced the new NHS to charge for medicines.
Robert Bud is keeper of science and medicine at the Science Museum, London
17: The number of women candidates who stood for election to parliament in 1918
Thousands of women became politicised during the Edwardian-era campaign for the parliamentary vote, so at first glance it may seem surprising that only 17 women stood for election in 1918 – the first general election in which women could participate in the representative process, both as voters and as parliamentary candidates.
The Representation of the People Act, which received Royal Assent on 6 February 1918, was unclear as to whether women could stand as parliamentary candidates, and opinions on the issue were divided. When the coalition government rushed through the Parliament (Qualification of Women) Bill, which became law on 21 November 1918, a general election for 14 December had already been announced, with 4 December given as the date by which nominations for parliamentary candidates had to be received. This gave women who wished to stand for election just three weeks in which to find a seat, enter a nomination, choose an election agent, draw up election policy, secure the support of unpaid helpers, raise funds, organise meetings and publicity – and, perhaps most importantly of all, decide whether they would stand as independents or seek the nomination of one of the main, male-oriented political parties of the day: Conservative, Liberal or Labour.
Of the 17 women who stood as parliamentary candidates contesting 706 seats, only nine were adopted by the three main political parties. Christabel Pankhurst was the best known, but she stood for the Women’s Party, an organisation that she and her mother had founded in 1917. Christabel was the only woman candidate to receive the support of the coalition government, but she lost to her Labour rival by just 775 votes.
Only one woman was elected to parliament in 1918 – Constance Markievicz. But, as a member of Sinn Fein, she refused to swear allegiance to the British crown and never took her seat in the Commons.
June Purvis is professor of women’s and gender history at the University of Portsmouth
500,000: The estimated number of German civilian deaths from strategic bombing during the Second World War
The Blitz was the biggest thing to happen to Britain during the Second World War, and in many ways it has come to define the whole of Britain’s experience of war on the home front. But what many people tend to overlook is that German strategic bombing of Britain, which inflicted around 50,000 deaths, killed roughly a tenth of the number who died in similar attacks on Germany. Many of those attacks on Germany were carried out by Britain’s Bomber Command, which itself lost some 50,000 crew in the conflict.
The story of Britain during the Second World War needs to be less fixated on the Blitz, and recognise that Britain was itself the perpetrator of far heavier bombing raids on Germany. This was not an aberration, or a response to the Blitz, but rather a long-standing policy of the British state to use machines to wreck the German war economy.
David Edgerton is professor of modern British history at King’s College London
1,138: The number of London children recorded as dying of “teeth” in 1685
This statistic is taken from a 1685 London Bill of Mortality, which listed causes of death in London parishes. Poor women called ‘searchers’ were responsible for collecting the data; they were paid small sums to knock on doors to find out causes of death. Searchers were widely feared because they were associated with infection.
The diseases listed are bizarre: they include things like “frighted”, “suddenly” and “teeth”. The latter was short for “the breeding of teeth” – or teething as we would know it today. It was considered a major cause of infant disease and death in the early modern period: in 1664 the physician J.S. declared that teething “is alwayes dangerous by reason of the grievous Symptomes it produces, as Convulsions, Feavers, and other evils”.
But how did teething cause disease? It was believed that living beings were made up of special substances called humours, which contained different amounts of heat and moisture. When the humours were balanced, the body was healthy, but when they became imbalanced, disease resulted. Teething was dangerous because it caused “sharp Pain like the pricking of needles”, which in turn generated “great heat”, and heat brought diseases caused by hot humours, such as fevers. In childhood, bodies were especially warm; ageing was deemed a cooling process. Thus, any extra warmth in children was believed to spell trouble health-wise.
Doctors and parents went to great lengths to mitigate the hazards of teething. The most popular treatment was to “annoint the gummes with the braynes of a hare”. The midwifery expert François Mauriceau suggested giving children “a little stick of Liquorish to chomp on”, or “a Silver Coral, furnish’d with small Bells”, to “divert the Child from the Pain”. More extreme measures included cutting the gums with a lancet, or hanging a “Viper’s Tooth about the child’s Neck”, which by a “certayne hidden propertie, have vertue to ease the payne”.
Dr Hannah Newton is author of The Sick Child in Early Modern England, 1580–1720 (Oxford University Press, 2012)

Published on February 07, 2017 02:30
February 6, 2017
5 things you (probably) didn’t know about the Dark Ages
History Extra
The Lindisfarne Gospels, carpet page and incipit, c700-20. (British Library)
1. Why is the period known as ‘dark’?
The term ‘Dark Age’ was used by the Italian scholar and poet Petrarch in the 1330s to describe the decline in later Latin literature following the collapse of the Western Roman empire. In the 20th century, scholars used the term more specifically in relation to the 5th–10th centuries, but it is now largely seen as a derogatory term, used to contrast periods of perceived enlightenment with supposed cultural ignorance. A very quick glance at the remarkable manuscripts, metalwork, texts, buildings and individuals that saturate the early medieval period reveals that ‘Dark Age’ is very much an out-of-date term. It’s best used as a point of reference against which to show how vibrant the time in fact was.
Gold, garnet and glass shoulder clasps from the Sutton Hoo Ship Burial, c 625AD. (British Museum)
2. It was religiously diverse
The early medieval period was characterised by widespread adherence to Christianity. However, there was a great deal of religious variety, and even the Christian church itself was a diverse and complicated entity. In the north, Scandinavia and parts of Germany adhered to Germanic paganism, with Iceland converting to Christianity only in 1000 AD. Folk religious practices continued. Late in the 8th century, an Anglo-Saxon monk called Alcuin questioned why heroic legend still fascinated Christians, asking: “What has Ingeld to do with Christ?” Within the church there were many lines of division. Monophysitism, for example – the doctrine that Jesus had just one nature rather than two, human and divine – split both society and the church, causing division at the level of emperors, states and nations.
The Franks Casket, carved on whale bone, with runic poetry and showing scenes of the nativity and Weland’s revenge, c700. (British Museum)
3. It was not a time of illiteracy and ignorance
The connection between illiteracy and ignorance is a relatively modern phenomenon. For most of the medieval period and beyond, the majority of information was transmitted orally and retained through memory. Societies such as that of the early Anglo-Saxons could recall everything from land deeds and marital associations to epic poetry. The ‘scop’, or minstrel, could recite a single epic over many days, indicating hugely sophisticated mental retention. With the establishment of monasteries, literacy was largely confined within their walls. Yet in places like the holy community at Lindisfarne, the monks were able to create sophisticated theological texts and extraordinary manuscripts.
Panels from the Ruthwell Cross showing Jesus with Mary Magdalene, and runic passages from ‘The Dream of the Rood’, 8th century, Ruthwell Church, Dumfriesshire.
4. This was a high point for British art
Far from a ‘dark’ time when all the lights went out, the early medieval period saw the creation of some of the nation’s finest artworks. The discovery of the Sutton Hoo ship burial on the eve of the Second World War redefined how the Anglo-Saxons were perceived. The incredible beauty of the jewellery, together with the sophisticated trade links indicated by the array of finds, revealed a court that was well connected and influential. After the arrival of Christian missionaries in 597 AD, Anglo-Saxons had to get to grips with completely new technologies. Although they had never made books before, within a generation or two they were creating remarkable manuscripts such as the Lindisfarne Gospels and the earliest surviving single-volume copy of the Vulgate Bible, the Codex Amiatinus. They also invented a new form of art: the standing stone high cross. Arguably the most expressive is the Ruthwell Cross, where the cross itself speaks of Christ’s passion through the runic poetry carved on its sides.
Finds from the Staffordshire Hoard which was discovered in 2009, the largest hoard of Anglo-Saxon gold and silver metalwork yet found. (Birmingham Museum)
5. There is still so much to discover
With many periods in history, it can be difficult to find something new to explore or write about. Not so with the early medieval period. There are relatively few early medievalists, and a wealth of research still to be done. What’s more, advances in archaeology are only now revealing how people in this period lived. When societies build more in timber than in stone, it can be hard to find evidence in the archaeological record, but more is coming to light now than ever before. Then there are the surprise discoveries: manuscripts long hidden in archives, hoards concealed in fields, references only recently translated. There is still so much to be done, and this is a rich and rewarding period to immerse yourself in.
Dr Janina Ramirez is a British art and cultural historian and television presenter.

Published on February 06, 2017 02:30
February 5, 2017
Queen Victoria's 50 inch drawers... and other items from the royal wardrobe with Lucy Worsley
History Extra
Lucy Worsley with George III's waistcoat. (BBC Silver River)
Clothes for a ‘mad’ king, George III
Among the most poignant surviving garments in Historic Royal Palaces’ bizarre and brilliant Royal Ceremonial Dress Collection are some of the clothes made for George III (1738–1820) to wear during his episode of so-called ‘madness’.
Today, some people believe that it was bipolar disorder; others that it was the painful physical illness porphyria. Either way, there were long periods when the king was unable to look after himself. He was fed from a jug with a spout, and helped into his clothes by his valet.
The ‘mad’ king’s shirts have extra-wide shoulders so that servants could lift them over his head, and the waistcoat that I’m pictured with above has sleeves enlarged so that his arms could be poked in more easily. The waistcoat also bears orange-coloured stains of food, drink or dribble. When this particular item was sent by a palace servant to the souvenir-hunter who asked for it, it was accompanied by apologies. It was the only item available, the servant said – the rest of the king’s clothes were just too soiled.
George’s periods of absence from politics, through illness, caused dismay and confusion, especially when his son the Prince Regent tried to make political capital out of them. But the sheer length of his reign brought stability, and there’s no doubt that George III was a conscientious and well-intentioned king.
Pants belonging to a queen who ate “a little too much”
Queen Victoria (1819–1901) was a wand of a girl when she came to the throne at 18, with a waist measuring 22 inches. Yet she always struggled with her appetite and, as the capacious drawers pictured below show, this was to catch up with her in later life.
“She is incredibly precocious for her age and very comical,” said her grandmother. “I have never seen a more alert and forthcoming child.” However, Victoria was also described as a “little princess [who] eats a little too much and almost always a little too fast”.
In 1861 came the great catastrophe of her life, the death of her husband, Albert. With this, Victoria lost the only person who’d been able to call her ‘Victoria’ instead of ‘Your Majesty’, and – crucially – the only person who could tell her ‘no’.
During the following decades, Victoria became what we would today call clinically depressed. She lost interest in life, became reclusive, and comforted herself with food. She was either indulged by her doctors (frightened of her, like everyone else) or else bullied by the politicians who wanted her to get on with being queen again.
As a widow, Victoria refused to wear any colour other than black for her bodices and skirts. These were offset only by a white widow’s cap, and white underwear threaded with black ribbons. A pair of her drawers, kept at Kensington Palace, has a waist measurement even more impressive than George IV’s, considering their relative heights: 50 inches!
Queen Victoria's chemise and split drawers, which measured an ''impressive'' 50 inches. (Historic Royal Palaces)
The enormous breeches of George IV
Henry VIII was, notoriously, 54 inches around the waist later on in life, and George IV (1762–1830) equalled his record. George IV’s greatest problem seems to have been mental: he lacked the toughness to stand up to a life lived in public. He wasn’t untalented – he had terrific visual flair, and his wife said he would have made a great hairdresser. It was just a shame he had to be king.
As well as over-eating, George IV became an alcoholic, and an addict of opium in the form of laudanum. The Duke of Wellington described a tremendous meal eaten later on in the king’s life: “What do you think of his breakfast yesterday? A Pidgeon and Beef Steak Pye of which he ate two Pigeons and three Beefsteaks, Three parts of a bottle of Mozelle, a Glass of Dry Champagne, two Glasses of Port [and] a Glass of Brandy.”
The failure of George’s breeches to fasten formed a key part of his image in the cutting caricatures produced by Cruikshank and others, which caused lasting damage to his reputation. George tried to reduce his stomach with what was called his ‘belt’, a kind of simple corset. A paper pattern for the original still survives at the Museum of London, and from it, a replica has been made for Brighton’s Royal Pavilion.
When George was due to be painted by Sir David Wilkie, the portraitist was kept waiting for some hours while the king’s servants trussed him into his undergarments. When the king finally appeared, Wilkie said he looked like “a great sausage stuffed into the covering”.
George IV’s declining health made him increasingly reclusive. On the one hand, his failure to act on the terrible social problems caused by the industrial revolution brought him vilification. On the other, his relative invisibility meant that no one actually took the trouble to bring him down: unlike the French monarchy, the British one survived intact.
The future Edward VIII’s flashy safari suit
George V, a monarch devoted to decorum and duty, was a stickler for correct dress. From early on, the flair for fashion exhibited by his eldest son, the future Edward VIII (1894–1972), caused trouble between them, and it presaged disaster of a far more serious sort.
George V disliked Edward’s turned-up trousers and his loud, houndstooth tweeds. He thought it regrettable that Edward would come to tea in his riding clothes, or fail to wear gloves at a ball. The safari suit is a typical outfit of Edward’s. Self-designed, it has adjustable arms and legs that could be altered for whichever leisure pursuit he wished to follow on any particular day of a holiday in exotic Africa.
Edward’s youthful, bareheaded style found him fame and popularity with young people all over the world. But there was something solid in George V’s misgivings. Edward’s flashy, informal clothes hinted that he wasn’t looking forward to wearing a crown, and in 1936, he finally refused to do so when he abdicated from the throne.
Left image: The future Edward VIII's safari suit, with arms and legs that could be altered for different leisure pursuits. (BBC Silver River) Right image: Edward sporting a casual tweed suit, which was guaranteed to raise his father's ire, c1920. (Mary Evans)
William III’s vest and socks
With his small stature and twisted spine, William III and II (1650–1702) wasn’t a king in the charismatic, physically impressive mould of Henry VIII. He rebuilt the countryside palace at Hampton Court partly because of his asthma: he needed fresh air because he couldn’t breathe in the damp, smoky, urban palace of Whitehall. This is all evident from his underwear, which looks like it was intended for a 12-year-old. These are tiny clothes for a tiny king, including green royal socks with clocks topped by a little crown.
William had deposed his father-in-law, James II and VII, to become joint ruler with his wife (and James’s daughter) Mary II. And in fact he was the ideal king to make a complicated, ambivalent and lashed-up succession work. William compensated for his physical weakness through political nous and the pragmatism that he’d developed during earlier years as a leader of the various, and sometimes unruly, states of the Netherlands.
With his hooked nose and opaque personality, William’s new British subjects found him distinctly unregal: hard to read, as well as physically puny. But today his achievements in seizing a throne with little bloodshed and achieving stability make him appear an exceptionally able king.
Lucy Worsley is a writer, presenter and chief curator of the Historic Royal Palaces.

Published on February 05, 2017 01:30
February 4, 2017
A murder of crows: 10 collective nouns you didn’t realise originate from the Middle Ages
History Extra
Aesop's fables: The Fox and The Crow. Illustration after 1485 edition printed in Naples by German printers for Francesco del Tuppo. (Photo by Culture Club/Getty Images)
Why are geese in a gaggle? And are crows really murderous? Collective nouns are one of the most charming oddities of the English language, often with seemingly bizarre connections to the groups they identify. But have you ever stopped to wonder where these peculiar terms actually came from?
Many of them were first recorded in the 15th century in publications known as Books of Courtesy – manuals on the various aspects of noble living, designed to prevent young aristocrats from embarrassing themselves by saying the wrong thing at court.
The earliest of these documents to survive to the present day was The Egerton Manuscript, dating from around 1450, which featured a list of 106 collective nouns. Several other manuscripts followed, the most influential of which appeared in 1486 in The Book of St Albans – a treatise on hunting, hawking and heraldry, written mostly in verse and attributed to the nun Dame Juliana Barnes (sometimes written Berners), prioress of the Priory of St Mary of Sopwell, near the town of St Albans.
This list features 164 collective nouns, beginning with those describing the ‘beasts of the chase’, but extending to include a wide range of animals and birds and, intriguingly, an extensive array of human professions and types of person.
Those describing animals and birds have diverse sources of inspiration. Some are named for the characteristic behaviour of the animals (‘a leap of leopards’, ‘a busyness of ferrets’), or by the use they were put to by humans (‘a yoke of oxen’, ‘a burden of mules’). Sometimes they’re given group nouns that describe their young (‘a covert of coots’, ‘a kindle of kittens’), and others by the way they respond when flushed (‘a sord of mallards’, ‘a rout of wolves’).
Many of those describing people and professions go further still in revealing the medieval mindset of their inventors, opening a window into the past from which we can enjoy a fascinating view of the medieval world.
1) “A tabernacle of bakers”
Bread was the mainstay of a medieval peasant’s diet, with meat, fish and dairy produce too expensive to be eaten any more than once or twice a week. Strict laws governing the distribution of bread stated that no baker was allowed to sell his bread from beside his own oven, and must instead purvey his produce from a stall at one of the king’s approved markets.
These small, portable shops were known in Middle English as ‘tabernacula’, defined by the Dutch lexicographer Hadrianus Junius in his Nomenclator, first translated into English in 1585, as ‘little shops made of boards’.
2) “A stalk of foresters”
The role of a forester in medieval society was respectable and well paid. Geoffrey Chaucer held the position in the royal forest of North Petherton in Somerset, and records from 1394 show that he was granted an annual pension of £20 by Richard II – a sum that reflected the importance of the role to hunting-mad noblemen.
A forester’s duties included protecting the forest’s stock of game birds, deer and other animals from poachers. From time to time they also stalked criminals, who took to the forests to evade capture.
3) “A melody of harpers”
Depicted in wall paintings in Ancient Egyptian tombs, the harp is one of the oldest musical instruments in the world, and by the medieval period – the age of troubadours and minstrels – was experiencing a surge in popularity.
This was an era defined by its emphasis on knightly tradition, and the harp often accompanied songs about valiant deeds and courtly love. In great demand at the estates of the upper classes, travelling harpists often moved from town to town performing instrumental accompaniment at banquets and recitals of madrigal singing. There were high-born harpists, too: both Henry VIII and Anne Boleyn were keen players.
4) “A sentence of judges”
Up until the 12th century, the law was deeply rooted in the feudal system, whereby the lord of the manor could charge and punish perpetrators of crime – often poaching from his land – as he saw fit. But in 1166, Henry II sought to shift the power away from individual landowners and bring it more directly under his own control.
He established the courts of assizes, where a national bench of judges travelled around the country attending quarterly court sessions. These judges based their decisions on a new set of national laws that were common to all people, which is where we get the term ‘common law’.
Though more egalitarian than the manorial system, assizes judges could be harsh in the sentences they delivered, which ranged from a stint in the stocks to public execution.
5) “A faith of merchants”
Merchants lived outside the rigid structure of feudalism, and their growing success in the 15th century had an enormous impact on the structure of society. They formed guilds of fellow traders, which eventually bought charters directly from the king, allowing the towns to become independent of the lord of the manor.
‘Faith’ as it is used here was a reference to the trustworthiness of a person, and is meant ironically, since merchants were rarely trusted. Court documents from the time record the various tricks of the trade that were used to con the public, including hiding bad grain under good, and stitching undersized coal sacks to disguise small measures of coal.
All offences were officially punishable by a stint in the pillory, but because the guilds were self-regulated, most perpetrators got off with only a fine – to the inevitable anger of the masses.
6) “An abominable sight of monks”
Monks weren’t particularly popular during the 15th century. Bede’s Life of Cuthbert, dated around AD 725, is the story of a party of monks who almost drowned when their boat was caught in a storm on the River Tyne. Cuthbert pleaded with the peasants on the bank for help but “the rustics, turning on him with angry minds and angry mouths, exclaimed, ‘Nobody shall pray for them: May God spare none of them! For they have taken away from men the ancient rites and customs, and how the new ones are to be attended to, nobody knows.’”
By the 15th century this resentment of the trampling of pagan traditions had been exacerbated by a perception of monks as being well fed and comfortable while the general population starved. ‘Abominable’ is defined by the Oxford English Dictionary as “causing moral revulsion”, which is a fairly accurate description of the reaction this image provoked.
7) “A superfluity of nuns”
Superfluity can be interpreted in two ways – the first is as historical fact. There were around 138 nunneries in England between 1270 and 1536, many of which were severely overcrowded. The convent was seen as a natural step for the daughters of the nobility who had passed marriageable age, and lords put pressure on prioresses to accept their daughters even if the convents were already full.
Alternatively, though, the excess of nuns referred to here could have been a comment on the emerging view among agitators for church reform that the days of the monastery and convent were over. Some 50 years after this noun appeared in print in The Book of St Albans, Henry VIII ordered their closure, and the Protestant Reformation was in full swing.
8) “A stud of horses”
Horses were at the absolute centre of life in the Middle Ages. Rather than the breeds we’re familiar with today, medieval horses were classified by the role they played in society. There were destriers, stallions that were used as warhorses by royalty and lords; palfreys, bred for general-purpose riding, war and travel, usually owned by the wealthy; coursers, steady cavalry horses; and rouncies – common-grade hack horses of no special breeding.
During the Middle Ages, monasteries often ran breeding centres called stud farms – ‘stud’ is related to the German word ‘Stute’, meaning mare. State stud farms also existed: the first was established under Louis XIV of France in 1665, by which time ‘a stud of horses’ was already established as the proper collective.
9) ‘A pack/cry/kennel of hounds’
Hunting dogs were important members of the medieval household. Every noble family kept kennels for their dogs, and these were looked after by a team of dedicated servants.
‘A cry of hounds’ is thought to derive from the hunting cry that instructs the hounds in their pursuit. The traditional English hunting call ‘Tally Ho!’ is a shortening of ‘Tallio, hoix, hark, forward,’ which, according to an 1801 edition of The Sporting Magazine, is an Anglicized version of the French terms ‘Thia-hilaud’ and ‘a qui forheur’, which appear in La Vénerie by Jacques du Fouilloux, first published at Poitiers in 1561.
This was adapted into English by George Gascoigne under the title The Noble Arte of Venerie, and became one of the pillars of a young gentleman’s hunting education.
10) “A richesse of martens”
The European pine marten was considered a top prize for hunters in the Middle Ages. Of all the ‘vermin of the chase’, which included foxes, wild cats, polecats and squirrels, the marten was the most sought after because of its valuable pelt.
Tudor ‘statutes of apparel’ – strict laws governing the amount of money the people could spend on clothing – dictated the colours, cuts and materials that could be worn by each level of society, and stated which furs could be worn by which tier of the aristocracy. Only those of or above the rank of duke, marquess and earl were allowed to wear sable fur, while ermine, the white winter coat of the stoat, which could only be obtained for a few months of the year, was reserved for royalty.
Chloe Rhodes’ An Unkindness of Ravens: A Book of Collective Nouns is published by Michael O'Mara.

Published on February 04, 2017 02:00
February 3, 2017
Scientists Unravel Secrets of a Hidden Room Within a Hidden Room in English Tudor Mansion
Ancient Origins
A team of scientists using 3D laser scanners has revealed the secrets of a hidden room, known as a "priest hole," in the tower of an English Tudor mansion linked to the failed "Gunpowder Plot" to assassinate King James I in 1605.
New Study Reveals Secrets of the “Priest Holes”
The secrets of a hidden room in Coughton Court, a Tudor mansion associated with the plot to assassinate King James I in 1605, have been revealed in a new study. The double room was a hiding place for priests during the anti-Catholic persecutions of the 16th and 17th centuries, and the house was leased by Sir Everard Digby, one of the leading conspirators of the plot. According to Christopher King, an assistant professor in the Department of Archaeology at the University of Nottingham in the United Kingdom and one of the lead researchers of the study, priest holes were originally built inside walls and between floors, as places where a priest could hide from his persecutors while the family of the house carried on an apparently ordinary life. King told Live Science: "We know that priests were hiding in these spaces for up to three days while people were searching the properties. Some of them are really very small, where the priest would be in quite an enclosed box-like space."
Coughton Court, Warwickshire ( CC by SA 3.0 )
A Little History Behind That Dark Era for Catholicism
During the 16th century, Europe was under the religious leadership of the Roman Catholic Church. However, over time, protests against the Catholic Church and its influence eventually led to the formation of the Protestant movement. The separation of the Church of England from Rome under Henry VIII in 1534 brought England alongside this broad Reformation movement. Under the reign of Queen Elizabeth I of England, which began in 1558, Catholics were persecuted by law and priests were imprisoned, tortured, and frequently executed. As a result of this oppression, wealthy Catholic families began building secret chambers and passages in their homes, called ‘priest holes’, in order to hide priests when the ‘priest hunters’ came searching.
Priest hunters took their job very seriously, sometimes searching a house for days or even weeks. They would move furniture, lift floorboards, bang the walls for sounds of a hollow cavity, and plunge their swords between cracks and crevices. They counted windows from the outside and inside, and measured the height of ceilings and the length of walls, in the hope of detecting hidden chambers. Clearly, the priest holes had to be very cleverly constructed to evade such extensive searches.
The consequences if a priest were captured. Engraving by Gaspar Bouttats. ( Wikipedia)
Priest holes were frequently built into fireplaces, attics, and staircases. Sometimes a network of passages led to the final hiding place; at other times the priest hole was hidden inside another chamber, making it more difficult to find. More often than not, however, the priest holes were tiny, with no room to stand or move. Priests sometimes had to stay in them for days at a time with little to no food and water, and no sanitation. Sometimes they would die of starvation or suffocation if the priest hunts went on for too long.
The Importance of the 3D Laser Scanning Equipment
The priest hole in Coughton Court was first discovered in the 1850s, but more details have now been revealed than ever before thanks to 3D laser scanners. In order to understand better how the priest hole was constructed and hidden from searchers, King and his colleagues used 3D laser scanning equipment to accurately spot the secret chambers and determine their location in relation to the rest of the building and its grounds.
The priest hole (in color) was built in a closed-off space in a tower of Coughton Court, as a place for Catholic priests to hide from search parties. Credit: University of Nottingham
The compound images and 3D computer models generated from the laser scans show the chamber's "double-blind" construction, designed to deceive priest hunters into thinking they had found an empty priest hole, King told Live Science. "When they're searching, they think they've found the priest hole but it's empty, but actually the priest is hidden in the more concealed space beyond," he explained. "And that's what happens at Coughton: there's one chamber under the floor in the turret of the tower, and then there is another trap door that goes through into a second space, which we assume is where the priest was actually hiding." King's colleague Dr Lukasz Bonenberg, of the University of Nottingham, emphasised the significant contribution of modern technology to the project: "Terrestrial laser scanning is an important new technology for recording ancient monuments as they capture a huge amount of data very quickly and this is the first time that TLS has been used for the purpose of visualising hidden spaces inside Tudor houses. Digital visualizations of historic buildings are vital tools for helping the public to picture the past," he said, as the Daily Mail reports.
Top image: The priest hole at Coughton Court, England ( CC by SA 3.0 )
By Theodoros Karasavvas

Published on February 03, 2017 01:30
February 2, 2017
Ground Hog Day

On this day in 1887, Groundhog Day, featuring a rodent meteorologist, is celebrated for the first time at Gobbler’s Knob in Punxsutawney, Pennsylvania. According to tradition, if a groundhog comes out of its hole on this day and sees its shadow, there will be six more weeks of winter weather; no shadow means an early spring.
Groundhog Day has its roots in the ancient Christian tradition of Candlemas Day, when clergy would bless and distribute candles needed for winter. The candles represented how long and cold the winter would be. Germans expanded on this concept by selecting an animal–the hedgehog–as a means of predicting weather. Once they came to America, German settlers in Pennsylvania continued the tradition, although they switched from hedgehogs to groundhogs, which were plentiful in the Keystone State.
Groundhogs, also called woodchucks and whose scientific name is Marmota monax, typically weigh 12 to 15 pounds and live six to eight years. They eat vegetables and fruits, whistle when they’re frightened or looking for a mate and can climb trees and swim. They go into hibernation in the late fall; during this time, their body temperatures drop significantly, their heartbeats slow from 80 to five beats per minute and they can lose 30 percent of their body fat. In February, male groundhogs emerge from their burrows to look for a mate (not to predict the weather) before going underground again. They come out of hibernation for good in March.
In 1887, a newspaper editor belonging to a group of groundhog hunters from Punxsutawney called the Punxsutawney Groundhog Club declared that Phil, the Punxsutawney groundhog, was America’s only true weather-forecasting groundhog. The line of groundhogs that has since been known as Phil might be America’s most famous, but other towns across North America now have their own weather-predicting rodents, from Birmingham Bill to Staten Island Chuck to Shubenacadie Sam in Canada.
In 1993, the movie Groundhog Day starring Bill Murray popularized the usage of “groundhog day” to mean something that is repeated over and over. Today, tens of thousands of people converge on Gobbler’s Knob in Punxsutawney each February 2 to witness Phil’s prediction. The Punxsutawney Groundhog Club hosts a three-day celebration featuring entertainment and activities.
Published on February 02, 2017 02:00
February 1, 2017
Seeking eternity: 5,000 years of ancient Egyptian burial
History Extra
The Pyramids of Giza, Egypt. (Margaret Maitland)
Evidence of ancient Egyptian belief in life after death emerged as early as c4500 BC. Over the following millennia, the Egyptians’ preparations for eternal life changed significantly, with different styles of tombs, evolving mummification practices, and a wide variety of funerary objects.
Egypt’s dry climate, secure geographic position, and wealth of resources mean that we have been left with an abundance of evidence about burial practices. In the early 19th century, Europe’s race to uncover ancient Egyptian mummies and treasures amounted to little better than tomb looting. More recently, however, the archaeological recording of burials and a systematic approach to their study have made it possible to make sense of Egypt’s changing funerary traditions.
An upcoming exhibition at the National Museum of Scotland, The Tomb: Ancient Egyptian Burial, charts the development of burial in ancient Egypt, and examines one of the first tombs to be excavated and recorded in detail: a tomb that was used and reused for more than 1,000 years.
An eternal holiday
The ancient Egyptians saw the afterlife as a potential extension of their lives on earth – but an idealised version, almost an eternal holiday. Preparations for the afterlife are first evidenced in prehistoric Egypt (c4500–3100 BC) by the placement in burials of pots containing food and drink for the deceased. Through this period an increasing number of provisions were placed alongside the dead, such as stone or pottery vessels, eye makeup palettes, flint tools, and beads.
In this early period, the dead were buried in pits, usually laid facing west, towards the setting sun. Throughout ancient Egyptian civilisation, the sun held particular importance. To reach the afterlife, the Egyptians hoped to join the sun god, who set every evening and was reborn at dawn each morning on his eternal journey. The pyramids built to house the tombs of Egyptian kings evoke the descending rays of the sun – a stairway to heaven – and were often given names with solar associations, such as ‘King Snefru Shines’. The Great Pyramid was named ‘The Horizon of King Khufu’. Some of the earliest detailed information about Egyptian beliefs comes from the first funerary texts, inscribed on the walls of the burial chambers in royal pyramids around 2400–2250 BC. These were magic spells intended to protect the king’s body, and to reanimate it after death in order to help him ascend to the heavens.
Some spells were intended to be recited at the funeral, while others were written in the first-person, to be spoken by the deceased king addressing the gods.
These pyramid texts reveal that, as well as wishing to join the sun god, the deceased hoped to become like the god Osiris. According to myth, Osiris was the first king of Egypt. He was the first person to be mummified and brought back to life after death, thus becoming ruler of the afterlife. This dual system of afterlife beliefs, relating to both solar and Osirian rebirth, was to characterise burial through the rest of ancient Egyptian history.
Clay statue of the afterlife god Osiris, from Umm el-Qaab, Abydos, c1295–1186 BC. (National Museums Scotland)
Inside the tomb
The tomb itself was intended to help magically transform the dead into a semi-divine being like Osiris. Pyramids and royal tombs even had associated temples in which the king would be worshipped as a god after his death. Wealthy individuals had a decorated tomb-chapel – a public part of the tomb above ground, separate from the actual burial chambers – where relatives could visit to remember the deceased, and priests could make offerings and recite prayers.
In prehistoric burials, bodies were sometimes preserved naturally through the dry heat of the desert sand. The earliest active attempts to preserve human bodies involved wrapping the dead in resin-soaked linen, but the first real mummification took place during the pyramid age, or Old Kingdom (c2686–2134 BC). The body was dried using natron, a naturally occurring mixture of salts, and wrapped in linen. Sometimes the internal organs, the parts most likely to decay, were removed and mummified separately. The Egyptians believed that a person’s soul had the potential to leave the body to enjoy the afterlife, but it needed the preserved body as a resting place to return to each night.
During the Middle Kingdom (c2030–1800 BC), the funerary spells originally written inside royal pyramids began to appear on the coffins of priests and high officials. Osiris, god of the afterlife, also grew in importance. Rectangular coffins were slowly replaced by anthropoid coffins that depicted the deceased holding royal regalia, transformed into Osiris to ensure their resurrection. Wooden models depicting food and craft production were initially introduced to burials to magically provide for the dead and transfer their wealth and status into the afterlife. These developed into funerary figurines called shabtis, initially intended to act as a substitute for the deceased, in case they were called up by conscription to perform physical labour in the afterlife.
Wood anthropoid coffin of the estate overseer Khnumhotep, with gilded face, from Deir Rifa, Middle Kingdom c1940–1760 BC. (National Museums Scotland)
A major change in royal burial came in around 1530 BC, after a period of foreign occupation, when Egypt was reunified under the rule of a king from Thebes. Burial traditions in Thebes favoured rock-cut tombs in the cliffs on the west bank of the Nile, so royal pyramids were abandoned in favour of tombs hidden in the Valley of the Kings.
This period, the New Kingdom (c1550–1069 BC), marked the height of the ancient Egyptian empire. The civilisation was extremely wealthy, a fact reflected in the hundreds of tombs constructed by officials opposite the capital city of Thebes. In addition to magical items produced specifically for burial, such as coffins, canopic jars, and shabtis, the prosperous now wanted to take their wealth with them. They filled their tombs with all the beautiful things that they enjoyed in life, from jewellery to furniture. Another new innovation of this time was the Book of the Dead, a collection of magical spells developed out of the texts written inside royal pyramids almost 1,000 years earlier. For the first time, these spells were written and illustrated on papyrus scrolls, shrouds and amulets for the wealthy to take with them to the afterlife.
Clay shabtis and wooden shabti box, from Thebes (Luxor), c747–656 BC. (National Museums Scotland)
Our exhibition at the National Museum of Scotland focuses on a tomb built in the New Kingdom period for a chief of police and his wife. When it was excavated in 1857, detailed records and plans were made of the tomb’s layout and the objects found within it. It was enormous, carved 38m into the desert cliffs, followed by a shaft 6m deep leading to several burial chambers, making it larger than some of the royal tombs in the Valley of the Kings. Ironically, while the chief of police was in charge of the security in the Valley of the Kings, he wasn’t able to protect his own tomb, which was eventually robbed. The only surviving object from his burial is a beautiful pair statue [a statue depicting a couple].
Sandstone pair statue of the chief of police and his wife, c1290 BC. (National Museums Scotland)
A change in fortune
As Egypt lost control of its empire and became politically unstable (c1069–650 BC), the country fractured between the north and south and eventually succumbed to foreign rule. This change in fortune meant that people looked to cut costs in their burials – the people of Thebes could no longer afford to build new tombs and fill them with lavish burial goods. Since wood was scarce and expensive, a new form of mummy-case made from linen and plaster (cartonnage) was invented. Old tombs were reused and recycled. A number of objects excavated in the tomb featured in our exhibition reveal that it was reused by several individuals during at least two different time periods between 800 and 650 BC.
With widespread tomb looting and reuse, Egyptians worried that organs stored in canopic jars might become separated from the body. The integrity of the body was important, so internal organs were mummified individually and then returned to the body. Canopic jars were technically no longer needed, but some people still made solid dummy canopic jars for the sake of tradition and symbolic protection.
Objects from daily life were generally no longer placed in the tomb; instead the focus was entirely on magical items made specifically for burial. Originally just one single shabti figurine had been placed in the tomb, but this number quickly grew (as quality decreased significantly), eventually becoming a workforce for the afterlife.
Over time, funerary objects like shabtis and canopic jars began to disappear, and even coffins became rare. By the time Egypt became part of the Roman empire in 30 BC, burials were focused entirely on the body itself; the most commonly used burial items were shrouds and either a mask or classical-style portrait placed over the face of the mummified person. The realism of classical portraiture may have appealed to those wishing to preserve the body and bring the dead back to life: in a portrait, they appeared alive again. On the other hand, traditional practices invoked ancient magic: gilded mummy masks aimed to make the dead semi-divine, based on an age-old belief that the skin of the gods was made of gold.
Mummified man with a portrait-board fitted over the face, excavated at Hawara c80–120 AD. (National Museums Scotland)
The final reuse of the tomb featured in our exhibition was by an important Egyptian family who lived under the last pharaonic ruler Cleopatra and witnessed the conquest of Egypt by the first Roman emperor. The burials of the high-official Montsuef and his wife Tanuat can be dated specifically to 9 BC, by inscriptions on their funerary papyri. Unlike the earlier standardised Books of the Dead, these were personalised with vignettes about the couple, attesting to the virtuous lives they had led. The new Roman influences in this era are evident in the gold and copper wreath Montsuef’s body wore over a traditional gilded mask, a classical symbol of victory re-interpreted as a symbol of triumph over death.
Montsuef’s funerary canopy is an amazing object – completely unprecedented in the history of Egyptian burial. Yet other elements of it are entirely traditional, such as its Egyptian temple shape, royal cobras and winged sun-disk motifs. Like other objects from this period, it demonstrates how much Egypt was being transformed by external influence, but also just how determined the Egyptians were to hold onto their traditions in their pursuit of the afterlife. While shrouds decorated with ancient Egyptian symbols were produced until around the late third century AD, with the introduction of Christianity to Egypt, followed by Islam, burial practices that had lasted thousands of years were finally abandoned. Nevertheless, the ancient Egyptians still live on today, given eternal life through their extraordinary burial objects.
Wooden funerary canopy of Montsuef from Thebes, 9 BC. (National Museums Scotland)
Dr Margaret Maitland is senior curator of Ancient Mediterranean collections at National Museums Scotland. The Tomb: Ancient Egyptian Burial presents the story of one extraordinary tomb, built around 1290 BC and reused for more than 1,000 years. The exhibition runs from 31 March until 3 September 2017 at the National Museum of Scotland and comes ahead of the opening of a new permanent Ancient Egypt gallery in 2018/19.

Published on February 01, 2017 02:30
January 31, 2017
Viking women: raiders, traders and settlers
History Extra
A detail of the decorative carving on the side of the Oseberg cart depicting a woman with streaming hair apparently restraining a man's sword-arm as he strikes at a horseman accompanied by a dog. Below is a frieze of intertwined serpentine beasts. c850 AD. Viking Ship Museum, Bygdoy. (Werner Forman Archive/Bridgeman Images)
In the long history of Scandinavia, the period that saw the greatest transformation was the Viking Age (AD 750–1100). This great movement of peoples from Denmark, Norway and Sweden, heading both east and west, and the social, religious, political and cultural changes that resulted from it, affected the lives and roles of women as well as the male warriors and traders whom we more usually associate with ‘Viking’ activities.
Scandinavian society was hierarchical, with slaves at the bottom, a large class of the free in the middle, and a top level of wealthy local and regional leaders, some of whom eventually sowed the seeds of the medieval Scandinavian monarchies. The slaves and the free lived a predominantly rural and agricultural life, while the upper levels of the hierarchy derived their wealth from the control and export of natural resources. The desire to increase such wealth was a major motivating factor of the Viking Age expansion.
One way to acquire wealth was to raid the richer countries to the south and west. The Scandinavians burst on the scene in the eighth century as feared warriors attacking parts of the British Isles and the European continent. Such raiding parties were mainly composed of men, but there is contemporary evidence from the Anglo-Saxon Chronicle that some warbands were accompanied by women and, inevitably, children, particularly if they had been away from Scandinavia for some time. The sources do not reveal whether these women had come from Scandinavia with the men or joined them along the way, but both are likely.
Despite popular misconceptions, however, there is no convincing evidence that women participated directly in fighting and raiding – indeed, the Chronicle describes women and children being put in a place of safety. The Viking mind, a product of a militarised culture, could certainly imagine women warriors, as evidenced by their art and mythology, but this was not an option for real women.
Settlement
In parts of Britain and Ireland and on the north-western coasts of the European continent, these raids were succeeded by settlement, as recorded in contemporary documents and shown by language, place-names and archaeology. An interesting question is whether these warriors-turned-settlers established families with Scandinavian wives or whether they integrated more quickly by taking local partners. There is strong evidence in many parts of Britain for whole communities speaking Scandinavian as the everyday language of the home, implying the presence of Scandinavian-speaking women.
Metal-detectorist finds show that Scandinavian-style jewellery was imported into England in considerable numbers on the clothing of the female settlers themselves. In the north and west of Scotland, Scandinavian speech and culture persisted well into the Middle Ages, and genetic evidence suggests that Scandinavian families and communities emigrated to places such as Orkney and Shetland en masse.
Not all of these settlers stayed in the British Isles, however. From both the Norwegian homeland and the British Isles, ambitious emigrants relocated to the previously uninhabited Faroe Islands and Iceland and, somewhat later, to Greenland and even North America. The Icelandic Book of Settlements, written in the 13th century, catalogues more than 400 Viking-Age settlers of Iceland. Thirteen of these were women, remembered as leading their households to a new life in a new land. One of them was Aud (also known as Unn) ‘the deep-minded’, who, having lost her father, husband and son in the British Isles, took it upon herself to have a ship built and move the rest of her household to Iceland, where the Book of the Icelanders lists her as one of the four most prominent Norwegian founders of the country. As a widow, Aud had complete control of the resources of her household and was wealthy enough to give both land and freedom to her slaves once they arrived in Iceland.
Slavery
Other women were, however, taken to Iceland as slaves. Studies of the genetic make-up of the modern Icelandic population have been taken to suggest that up to two-thirds of its female founding population had their origins in the British Isles, while only one third came from Norway (the situation is, however, the reverse for the male founding population).
The Norwegian women were the wives of leading Norwegian chieftains who established large estates in Iceland, while the British women may have been taken there as slaves to work on these estates. But the situation was complex and it is likely that some of the British women were wives legally married to Scandinavian men who had spent a generation or two in Scotland or Ireland before moving on to Iceland. The written sources also show that some of the British slaves taken to Iceland were male, as in the case of Aud, discussed above.
These pioneer women, both chieftains’ wives and slaves, endured the hardships of travel across the North Atlantic in an open boat along with their children and all their worldly goods, including their cows, sheep, horses and other animals. Women of all classes were essential to the setting up of new households in unknown and uninhabited territories, not only in Iceland but also in Greenland, where the Norse settlement lasted for around 500 years.
Both the Icelandic sagas and archaeology show that women participated in the voyages from Greenland to some parts of North America (known to them as Vinland). An Icelandic woman named Gudrid Thorbjarnardottir is particularly remembered for having been to Vinland (where she gave birth to a son), but also for having in later life gone on pilgrimage to Rome and become a nun.
A page from the 'Anglo-Saxon Chronicle' in Anglo-Saxon or Old English, telling of King Aethelred I of Wessex and his brother, the future King Alfred the Great, and their battles with the Viking invaders at Reading and Ashdown in AD 871 (c1950). (Photo by Hulton Archive/Getty Images)
Mistress of the house
These new settlements in the west were, like those of the homelands, rural and agricultural, and the roles of women were the traditional ones of running the household. Although free women did not by convention participate in public life, it is clear that the mistress of the household had absolute authority in the most important matters of daily life within its four walls, and also in many aspects of farm work.
Runic inscriptions show that women were highly valued for this. On a commemorative stone from Hassmyra, in Sweden, the deceased Odindisa is praised in runic verse by her husband for her role as mistress of that farm. The rune stones of the Isle of Man include a particularly high proportion commemorating women.
Those women who remained in Scandinavia also felt the effects of the Viking Age. The evidence of burials shows that women, at least those of the wealthier classes, were valued and respected. Much precious metalwork from the British Isles found its way into the graves of Norwegian women, usually presumed to be gifts from men who went west, but some were surely acquired abroad by the women themselves.
At the top level of society, women’s graves were as rich as those of men – the grave goods of a wealthy woman buried with her servant at Oseberg in Norway included a large and beautiful ship, several horses and many other valuable objects. Such a woman clearly had power and influence to be remembered in this way. With the establishment of the monarchies of Denmark, Norway and Sweden towards the end of the Viking Age, it is clear that queens could have significant influence, such as Astrid, the widow of St Olaf, who ensured the succession of her stepson Magnus to the Norwegian throne in the year 1035.
Urbanisation and Christianity
Another of the major changes affecting Viking Age Scandinavia was that of urbanisation. While trade had long been an important aspect of Scandinavian wealth creation, the development of emporia – town-like centres – such as Hedeby in Denmark, Kaupang in Norway, Birka in Sweden, or trading centres at York and Dublin, provided a new way of life for many.
The town-like centres plugged into wide-ranging international networks stretching both east and west. Towns brought many new opportunities for women as well as men in both craft and trade. The fairly standard layouts of the buildings in these towns show that the wealth-creating activities practised there were family affairs, involving the whole household, just as farming was in rural areas. In Russia, finds of scales and balances in Scandinavian female graves indicate that women took part in trading activities there.
Along with urbanisation, another major change was the adoption of Christianity. The rune stones of Scandinavia date largely from the period of Christianisation and show that women adopted the new religion with enthusiasm. As family memorials, the inscriptions are still dominated by men, but with Christianity, more and more of them were dedicated to women. Those associated with women were also more likely to be overtly Christian, such as a stone in memory of a certain Ingirun who went from central Sweden on pilgrimage to Jerusalem.
The Viking Age, then, was not just a masculine affair. Women played a full part in the voyages of raiding, trading and settlement, and the new settlements in the North Atlantic would not have survived very long had it not been for their contribution. This mobile and expansive age provided women with new opportunities, and sagas, rune stones and archaeology show that many of them were honoured and remembered accordingly.
Judith Jesch is professor of Viking studies at the University of Nottingham. Her publications include The Viking Diaspora (Routledge, 2015).

Published on January 31, 2017 02:00
January 30, 2017
8 ancient Egyptian gods and goddesses that you (probably) didn’t know about
History Extra
Ancient Egyptian papyrus, 11th-10th century BC, depicting the scarab-headed god Khepri, a divine version of the humble scarab beetle. (Photo by Art Media/Print Collector/Getty Images)
1. Taweret
At first sight the goddess Taweret, ‘the great female one’, appears to be composed of randomly selected animal parts. She has the body and head of a pregnant hippopotamus standing on its hind legs, the tail of a crocodile, and the limbs of a lioness – topped, occasionally, by a woman’s face. Her mouth lolls open to reveal rows of dangerous-looking teeth, and she often wears a long wig. We might find this combination of fierce animals and false hair frightening, but the women of ancient Egypt regarded Taweret as a great comfort, as she was able to protect them during childbirth by scaring away the evil spirits who might harm either the mother or the baby. This made her extremely popular so that, although she did not have a grand temple, her image was displayed on walls, beds, headrests and cosmetic jars in many private homes, and she even appears on palace walls.
The same assortment of animal parts – this time the head of a crocodile, the foreparts and body of a lion or leopard and the hind parts of a hippopotamus – can be found in Ammit, the ‘eater of the damned’. Unlike Taweret, Ammit was greatly feared. She lived in the kingdom of the dead where she squatted beside the scales used during the ‘weighing of the heart’, a ceremony that saw the heart of the deceased being weighed against the feather of truth. Those whose hearts proved light were allowed to pass into the afterlife. Hearts that weighed heavy against the feather were eaten by Ammit.
2. Bes
Bes was another god who brought comfort and protection to mothers and children. A part-comical, part-sinister dwarf with a plump body, prominent breasts, bearded face, flat nose and protruding tongue, Bes might be either fully human, or half-human, half-animal (usually lion). He might have a mane, a lion’s tail, or wings. He often wears a plumed headdress and carries either a drum or tambourine, or a knife.
Bes offered a welcome protection against snakes. But his primary role was as a dancer and musician who used his art to frighten away bad spirits during the dangerous times of childbirth, childhood, sex and sleep. His image decorated bedrooms of all classes, and we can also see him, either tattooed or painted, on the upper thigh of dancing girls.
A relief depicting the god Bes on a facade of the Small Temple of Hathor, Abu Simbel. His primary role was as a dancer and musician who used his art to frighten away bad spirits during the dangerous time of childbirth. (Photo by DeAgostini/Getty Images)
3. Neith
Neith is a warrior or a hunter. Human in form and bald, she wears a crown and carries a bow and arrows. Her linen sheath dress is so tight that, in an age before lycra, she would have had difficulty moving around the battlefield. Her title ‘mother of the gods’ identifies her with the creative force present at the beginning of the world, and she may even have been credited with inventing childbirth. On the wall of the temple of Khnum at Esna, in southern Egypt, we see Neith emerging from the primeval waters as a cow-goddess who creates land simply by saying the words: “Let this place be land for me.”
Neith was worshipped throughout Egypt, but was particularly associated with the western Delta town of Sais (modern Sa el Hagar), where her temple became known as the ‘house of the bee’. During the 26th dynasty (664–525 BC), a time when Sais was Egypt’s capital city, she became the dominant state god, and the kings were buried in the grounds of her temple. Her temple and the royal tombs that it contained are now lost.
4. The Aten
If Taweret and Ammit seem to have too many body parts, the god known simply as the Aten, or ‘the sun disk’, does not seem to have enough. The Aten is a bodiless, faceless sun that emits long rays tipped with tiny hands. He hangs in the sky above the royal family, offering them the ankh, symbol of life. As he has no known mythology, we can say very little more about him.
This apparently dull deity inspired such devotion in the pharaoh Akhenaten (ruled c1352–1336 BC) that he abandoned the traditional gods, closed their temples and built a new capital city dedicated to the Aten, which he named ‘Horizon of the Aten’ (modern Amarna). Had a private citizen decided to worship just one god, there would have been no problem. But Akhenaten, as pharaoh, was expected to make offerings to all of Egypt’s gods. His decision to abandon the traditional rituals was seen as very dangerous – surely the old gods would get angry? Not long after his death, the pantheon was restored by Tutankhamen (ruled c1336–1327 BC). As the old temples re-opened, the Aten sank back into obscurity.
5. Sekhmet
Many of us are familiar with Hathor, the gentle cow-headed sky goddess associated with motherhood, nurturing and drunkenness. Few of us realise that Hathor has an alter ego. When angry, she transforms into Sekhmet, ‘the powerful one’, an uncompromising, fire-breathing lioness armed with an arsenal of pestilences and plagues and the ability to burn Egypt’s enemies with the fierce heat of the sun. Sekhmet was a ruthless defender of her father the pharaoh and this, together with her skill with a bow and arrow, caused her to become closely associated with the army. When the sun god, Re, learned that the people of Egypt were plotting against him, he sent Sekhmet to kill them all. When he changed his mind and determined to save the people, he had a lot of trouble stopping the killing. Sekhmet was not entirely vicious, however. As ‘mistress of life’, she could cure all the ills that she inflicted, and her priests were recognised as healers with a powerful magic.
From The Book of the Dead of Userhetmos, the dead woman prays to the hippopotamus goddess Taweret. (Photo by Werner Forman/Universal Images Group/Getty Images)
6. Khepri
Khepri, ‘the one who comes into being’, is the morning sun. He is usually shown in the form of a beetle, although he might also be a beetle-headed man, or a beetle-headed falcon. He is a divine version of the humble scarab beetle whose habit of pushing around a large ball of dung made the ancients imagine a huge celestial beetle rolling the ball of the sun across the sky.
Hidden within the scarab beetle’s dung ball were eggs that eventually hatched, crawled out of the ball and flew away. Observing this, the Egyptians jumped to the conclusion that beetles were male beings capable of self-creation. This enviable ability to regenerate made the scarab one of Egypt’s most popular amulets, used by both the dead and the living. Although Khepri did not have a temple, he was often depicted alongside Egypt’s other gods in the royal tombs in the Valley of the Kings.
7. Renenutet
Renenutet was a cobra goddess. The Egyptian cobra can grow to be nine feet long and can, when angry or threatened, raise a third of its body from the ground and expand its ‘hood’ (cervical ribs). This made the female cobra a useful royal bodyguard. A rearing cobra (the uraeus) was worn on the royal brow; cobra amulets were included in mummy wrappings to protect the dead; and a painted pottery cobra, placed in the corner of a room, was believed to be an effective means of warding off evil ghosts and spirits.
Every year the river Nile flooded in late summer. The rising waters caused an increase in the number of snakes attracted to the settlements by the vermin flushed from the low-lying ground. This caused the cobra to be associated with the fertility of the Nile. Renenutet, ‘she who nourishes’, lived in the fertile fields where, as goddess of the harvest and granaries, she ensured that Egypt would not go hungry. Cobras were considered exceptionally good mothers, and Renenutet was no exception. As a divine nurse she suckled the king; as a fire-breathing cobra she protected him in death.
8. Geb
In most mythologies, the fertile earth is classed as female. In ancient Egypt, however, the earth was male. Geb was an ancient and important earth god who represented both the fertile land and the graves dug into that land. For this combination of attributes, and for his prowess as a healer, he was both respected and feared. He usually appears as a reclining man beneath the female sky. His naked green body often shows signs of his impressive fertility, and he may have grain growing from his back. Alternatively, he might appear as a king wearing a crown. In animal form, Geb might be a goose (or a man wearing a goose on his head) or a hare, or he might form part of the crew of the sun boat that sails across the sky each day.
Geb ruled Egypt during the time when people and gods lived together. Later, Greek tradition would equate Geb with the Titan Cronus, who overthrew his father Uranus at the urging of his mother, Gaia.
Joyce Tyldesley teaches a suite of online courses in Egyptology at the University of Manchester. She is the author of Myths and Legends of Ancient Egypt (Viking Penguin 2010).

Published on January 30, 2017 03:30
January 29, 2017
Spotlight on Karenne Griffin

Karenne Griffin
Some interesting facts you may not know:
Karenne kept lizards in her childhood. The largest were a bearded dragon and a skink, but there were plenty of little ones too.

And why did Karenne choose these particular lizards? Well, she grew up in Australia.

Part of fact number two isn’t true as Karenne has never really grown up. She still goes to gigs and acts like a teenager given half the chance.

Karenne has written two travel books as well as four novels. She’s currently working on two more novels. All books are available from Amazon in paperback as well as electronic versions on Kindle.
US Amazon Link
UK Amazon Link

Although she’s never been to the USA, Karenne contributed to The Dark Dozen: Stories for Scarborough. This is a fundraising effort to give Al Scarborough the life-saving medical treatment he needs. This collection of twelve rather dark tales is also available from Amazon. Please dig deep, folks, to help a worthy cause. Donations, no matter how small, can also be made to Al's GoFundMe page.

Visit Karenne's Webpage

Karenne on Facebook
Karenne on Twitter
Published on January 29, 2017 03:00