A Historical Case For Why The EU Could Endure For More Than 1,000 Years

06-18-2024 ~ Empires historically possessed a unique ability to organize diversity and successfully rule over people quite different from one another. The national states that succeeded them in Europe were organized on the basis of nationality (real or imagined) where diversity in language, ethnicity, or religion was deemed a barrier to national unity. The negative consequences of celebrating unique national identities became clear after virulent nationalism led to the rise of Nazi Germany and World War II left Europe in ruins.

The European Union seeks to prevent this from happening again by restoring the cosmopolitanism, size, and economic clout of a multinational empire but without creating a unitary state in the process.

The EU is designed as a supranational polity whose member states retain their national sovereignty and parochial identities but agree to be bound by its laws. They maintain their own armies, independent foreign policies, and national parliaments, and are free to leave if they so choose. The creation of such a chimeric political beast might appear unprecedented, but it is not. The Holy Roman Empire successfully managed the affairs of central Europe by creating a similar composite political structure beginning in 962 under Otto I. In a bid to create unity in the German lands, where the Romans never ruled, the Holy Roman Empire proclaimed itself the successor to the long-dead Roman Empire as the defender of a Catholic Christian Europe. Lacking a single capital and an imperial army, it did not seem to be much of an empire. However, its court system resolved disputes between member states, protecting the empire’s free cities and smaller estates (300+) against the more powerful ones. More unusually, it allowed individuals to seek redress against local rulers who had violated their rights. Its Reichstag, or Diet (legislature), met regularly to pass laws that were binding on all members, and its emperorship was an elective position. The emperor’s power was limited because the estates within the empire remained responsible for governing their own territories. They were free to make alliances with outside powers as long as these were not detrimental to the empire and did not pose a danger to public peace.

That the Holy Roman Empire survived for 900 years suggests that it was on to something, and that the EU can be viewed as a secular reincarnation of it—minus only an emperor’s crown and Christian faith as symbols of its unity. However, other than making Charlemagne a symbol of European unity, it is a legacy that is largely unacknowledged, and perhaps for good reason. The largely forgotten and derided Holy Roman Empire and the EU faced similar structural complexities and solved them in similar ways, but the EU’s project was far more substantial—creating a polity that is now the world’s third-largest economy, with a population of around 450 million people divided among 27 sovereign nations spread across more than 4.2 million square kilometers.

Map of Post-Brexit European Union
The EU emerged as a post-World War II aspirational project designed to avoid the conflicts that had devastated the continent for centuries, first by integrating Western Europe into a single market and then creating a common political structure to administer it without eliminating the nationally sovereign governments of its member states. It then expanded into Central and Eastern Europe after the dissolution of the Soviet Union in 1991. Intellectually, the EU rooted itself in the idea that there was a common European culture that transcended the continent’s many different languages, national divisions, and religions. It would be a union with a formidable regulatory bureaucracy and court system but one without a military or single capital. Imaginary European-style buildings and bridges illustrated its euro banknotes to avoid having to choose among real ones.

Because a problem solved is a problem soon forgotten, it is barely remembered that the initial rationale for the nascent union was to end a Franco-German hostility that had led to three consecutive conflicts in 70 years: the Franco-Prussian War (1870-1871), World War I (1914-1918), and World War II (1939-1945). This union began with the integration of coal and steel industries in 1951 and took political shape in 1957 when the Treaty of Rome created a European Economic Community that was more formally merged in 1965. Its six founding members (Belgium, France, Luxembourg, Italy, the Netherlands, and West Germany) occupied the same territories as Charlemagne’s Carolingian Empire had during the 8th and 9th centuries, the only previous time that the German and French worlds shared a single governing institution with common cultural aspirations.

The new entity had no single capital but instead split its institutions among the cities of Brussels, Strasbourg, and Luxembourg. They were all nondescript places located in borderlands that the French and Germans had long fought over, lacking the taint of invidious past glory that characterized Paris, Rome, or a then-divided Berlin. By the 1970s, both Germany and France had so thoroughly embraced the integration of the community’s economies that any notion that it was designed to reduce hostility between them was relegated to a historical footnote. Their common vision now was to expand into other parts of Europe and create a European entity that could compete more equally with the U.S. and the Soviet Union, something no European state could hope to achieve alone.

Enlargement began with the addition of Great Britain, Ireland, and Denmark (1973), Greece (1981), and then Spain and Portugal (1986). In 1993, the community was reorganized into the European Union before adding Sweden, Finland, and Austria in 1995. The withdrawal of Soviet troops from central Europe allowed Germany to reunite in 1990, and the collapse of the Soviet Union itself led to a further wave of new members in 2004: Estonia, Latvia, Lithuania, Poland, the Czech Republic, Cyprus, Malta, Slovakia, Hungary, and Slovenia. These were followed by Romania and Bulgaria in 2007, and Croatia in 2013. (Great Britain, after an unexpected negative referendum vote in 2016, left the EU in 2020.) But the successful establishment of the EU also depended on a fortuitous alignment in world politics. The U.S., following its maritime empire alliance template, was supportive of European unity and the creation of a new economic bloc as big as its own. Not only did the U.S. not hinder the EU’s emergence; it also provided a security framework through the NATO military alliance that left the union free to focus entirely on its economic affairs. The Soviet Union, by contrast, had ensured that not even a modicum of economic or political autonomy would emerge in the areas it occupied—the default position historically when large states were in a position to dominate smaller ones.

In his excoriating 1667 book, De Statu Imperii Germanici, which saw more than 100,000 copies printed, Samuel Pufendorf declared the Holy Roman (or German) Empire a “misshaped monster” because it lacked sovereignty over its component states. It also lacked a capital city and an army, which emerging national states (and all other types of empires) deemed foundational. It turns out that this structure, so ill-adapted to a world of emerging nation-states that it was driven to extinction, was perfectly designed for a union of sovereign states governed as a supranational polity largely through administrative regulation and a rules-enforcing judicial system. The EU had its own revenue stream that included customs receipts, required member contributions, and a percentage of nationally assessed value-added taxes. The EU Parliament resembled the representative Diets of the Holy Roman Empire in its multiple meeting places and in the disconnection between the legislature and a weak executive. While the Holy Roman Emperor was faulted for his inability to command the obedience of member states, the EU refused even to create a single chief executive officer. Instead, it had three separate presidencies whose priority depended on the issues involved: a rotating Council of Ministers presidency filled by a different member country every six months, a president of the European Commission, and a president of the EU Parliament.

A similarly decentralized Holy Roman Empire lasted close to a millennium in the German world and northern Italy but stood in sharp contrast to the centralized systems of government that developed in countries such as France or Britain, where the state’s chief executive was the most important player. One reason for the lack of focus on an executive office in the EU was that it had no military forces to command. Defense responsibilities rested with the national states themselves and, on a Europe-wide basis, with NATO. While the EU’s major players were members of the NATO alliance during the Cold War, smaller states with nonaligned policies were not. And while NATO had its large headquarters in Brussels and defending Europe was its core mission, it was distinct from the EU because it had many non-EU partners, including the U.S.—its dominant member—Canada, Iceland, Norway, and Turkey. When the EU expanded eastward in 2004, all of its new member states sought to join the NATO alliance to protect themselves from possible Russian aggression.

There were of course significant differences between the EU and its Holy Roman predecessor, the most distinctive of which was that the EU was resolutely secular. Its concept of European unity was cultural and economic rather than religious, but resistance to accepting Muslim Turkey as European enough to join the union demonstrated the persisting legacy of Europe’s long Christian history. Nor was the EU rooted in nostalgia for a past that had most recently produced violent nationalism, which left tens of millions of people dead in World War II. Seeking to end such violence demanded new ways of thinking and a future orientation. Appeals to nostalgia were left to anti-EU nationalist parties fighting a rearguard battle to dissolve the union in order to make their own countries great again. It was an ambition that became ever more difficult to achieve as the majority of Europe’s population could no longer imagine a world in which the EU did not exist.

Most of the EU’s income was redistributed through subsidies and capital investments tilted toward its poorer members, giving it financial leverage to curb uncooperative members who refused to recognize the supremacy of EU law. If they needed any lessons on the consequences of breaking away, they had only to observe the ongoing turmoil Britain endured after leaving the EU, a departure that looked less likely to restore its former glory than to lead to its own dissolution. In its wake, Scotland renewed its push for independence in order to rejoin, and the once-unthinkable prospect that the people of Northern Ireland might choose to reunify with EU member Ireland rather than stick with Britain became a distinct possibility. But perhaps the greatest difference between the EU and any other empire was that the EU was a product of voluntary alliance and treaty-making, not of wars of conquest. Empires may have improved the lives of those who lived in them as they evolved, but these benefits were appreciated only by the descendants of those who survived their violent formation. The EU’s attempt to recreate the advantages of a multinational empire without its brutality or aggressiveness—a caffeine-free espresso, if you will—sets it apart from the U.S., China, and Russia, which have not shaken off that habit. Whether it is a model that will have the longevity of the Holy Roman Empire remains to be seen.

By Thomas J. Barfield

Author Bio: Thomas J. Barfield is professor of anthropology at Boston University. His new book, Shadow Empires, explores how distinctly different types of empires arose and sustained themselves as the dominant polities of Eurasia and North Africa for 2,500 years before disappearing in the 20th century. He is a renowned historian of Central Eurasia and the author of The Central Asian Arabs of Afghanistan, The Perilous Frontier: Nomadic Empires and China, 221 BC to AD 1757, Afghanistan: An Atlas of Indigenous Domestic Architecture, and Afghanistan: A Cultural and Political History (revised and expanded second edition 2022, Princeton University Press).

Source: Human Bridges

Credit Line: This article is distributed by Human Bridges, first published in the Sentinel Post.

Migrating Workers Provide Wealth For The World

Vijay Prashad

06-18-2024 ~ Each year, the International Organization for Migration (IOM) releases its World Migration Report. Most of these reports are anodyne, pointing to a secular rise in migration during the period of neoliberalism. As states in the poorer parts of the world found themselves under assault from the Washington Consensus (cuts, privatization, and austerity), and as employment became more and more precarious, larger and larger numbers of people took to the road to find a way to sustain their families. That is why, when the IOM published its first World Migration Report in 2000, it wrote that “it is estimated that there are more migrants in the world than ever before.” It was between 1985 and 1990, the IOM calculated, that the rate of growth of world migration (2.59 percent) outstripped the rate of growth of the world population (1.7 percent).

The neoliberal attack on government expenditure in poorer countries was a key driver of international migration. Even by 1990, it had become clear that migrants were an essential source of foreign exchange for their countries through increasing remittance payments to their families. By 2015, remittances—sent mostly by the international working class—were three times the volume of Official Development Assistance (ODA) and also outstripped Foreign Direct Investment (FDI). ODA is the aid money provided by states, whereas FDI is the investment money provided by private companies. For some countries, such as Mexico and the Philippines, remittance payments from working-class migrants prevented state bankruptcy.

This year’s report notes that there are “roughly 281 million people worldwide” who are on the move. This is 3.6 percent of the global population. It is triple the 84 million people on the move in 1970, and much higher than the 153 million people in 1990. “Global trends point to more migration in the future,” notes the IOM. Based on detailed studies, the IOM finds that the rise in migration can be attributed to three factors: war, economic precarity, and climate change.

First, people flee war, and with the increase in warfare, this has become a leading cause of displacement. Wars are not the result of human disagreement alone, since many such disputes could be resolved if calm heads were allowed to prevail; conflicts are exacerbated into war by the immense scale of the arms trade and the pressure from the merchants of death to forgo peace initiatives and to use increasingly expensive weaponry to settle disputes. Global military spending is now nearly $3 trillion, three-quarters of it by Global North countries. Meanwhile, arms companies made a whopping $600 billion in profits in 2022. Tens of millions of people are permanently displaced due to this profiteering by the merchants of death.

Second, the International Labor Organization (ILO) calculates that about 58 percent of the global workforce—or 2 billion people—are in the informal sector. They work with minimal social protection and almost no rights in the workplace. The data on youth unemployment and youth precarity is stunning, and the Indian numbers are horrifying. The Centre for Monitoring Indian Economy shows that India’s youth—between the ages of 15 and 24—are “faced by a double whammy of low and falling labor participation rates and shockingly high unemployment rates. The unemployment rate among youth stood at 45.4 percent in 2022-23. This is an alarming six times higher than India’s unemployment rate of 7.5 percent.” Many of the migrants from West Africa who attempt the dangerous crossing of the Sahara Desert and the Mediterranean Sea flee the high rates of precarity, underemployment, and unemployment in the region. A 2018 report from the African Development Bank Group shows that, due to the attack on global agriculture, peasants have moved from rural areas into low-productivity informal services in cities, from where they decide to leave for the lure of higher incomes in the West.

Third, more and more people are faced with the adverse impacts of the climate catastrophe. In 2015, at the Paris meeting on the climate, government leaders agreed to set up a Task Force on Climate Migration; three years later, in 2018, the UN Global Compact agreed that those on the move for reasons of climate degradation must be protected. However, the concept of “climate refugees” is not yet established. In 2021, a World Bank report calculated that by 2050 there will be at least 216 million climate refugees.

The IOM’s new report points out that these migrants—many of whom lead extremely precarious lives—send home larger and larger amounts of money to help their increasingly desperate families. “The money they send home,” the IOM report notes, “increased by a staggering 650 [percent] during the period from 2000 to 2022, rising from $128 billion to $831 billion.” Most of these remittances in the recent period, analysts show, go to low-income and middle-income countries. Of the $831 billion, for instance, $647 billion goes to poorer nations. For most of these countries, the remittances sent home by working-class migrants far outstrip FDI and ODA put together and form a significant portion of Gross Domestic Product (GDP).

A number of studies conducted by the World Bank show two important things about remittance payments. First, they are more evenly distributed among the poorer nations: FDI transactions typically favor the largest economies in the Global South and go toward sectors that will not always provide employment or income for the poorest sections of the population. Second, household surveys show that these remittances considerably lower poverty in middle-income and low-income countries. For example, remittance payments by working-class migrants reduced the rate of poverty in Ghana (by 5 percent), in Bangladesh (by 6 percent), and in Uganda (by 11 percent). Countries such as Mexico and the Philippines see their poverty rates rise drastically when remittances drop.

The treatment of these migrants, who are crucial for poverty reduction and for building wealth in society, is outrageous. They are treated as criminals, abandoned by their own countries, which would rather spend vulgar amounts of money to attract far less impactful investment through multinational corporations. The data shows that there needs to be a shift in class perspective regarding investment. Migrant remittances are greater by volume and more impactful for society than the “hot money” that goes in and out of countries and does not “trickle down” into society.

If the migrants of the world—all 281 million of them—lived in one country, they would form the fourth largest country in the world after India (1.4 billion), China (1.4 billion), and the United States (339 million). Yet, migrants receive few social protections and little respect (a new publication from the Zetkin Forum for Social Research shows, for instance, how Europe criminalizes migrants). In many cases, their wages are suppressed due to their lack of documentation, and their remittances are taxed heavily by international wire services (PayPal, Western Union, and MoneyGram), which charge high fees to both the sender and the recipient. As yet, there are only small political initiatives that stand with the migrants, and no platform that unites their numbers into a powerful political force.

By Vijay Prashad

Author Bio: This article was produced by Globetrotter. Vijay Prashad is an Indian historian, editor, and journalist. He is a writing fellow and chief correspondent at Globetrotter. He is an editor of LeftWord Books and the director of Tricontinental: Institute for Social Research. He has written more than 20 books, including The Darker Nations and The Poorer Nations. His latest books are Struggle Makes Us Human: Learning from Movements for Socialism and (with Noam Chomsky) The Withdrawal: Iraq, Libya, Afghanistan, and the Fragility of U.S. Power.

Source: Globetrotter


Can Teething Predict How Fast You Will Grow?

Brenna Hassett – Photo: en.wikipedia.org

06-17-2024 ~ We know that humans live relatively long lives, and we certainly know that we spend a larger proportion of those lives as children than other species do. The question remains: how did we manage to extend this critical period of our growth? When and where did our ancestors start to stretch out the limits of physiology and build that long childhood? And where can we find evidence of this evolutionary process?

The very surprising answer is: in the mouths of babes—specifically, their teeth. But to understand how the timing of teeth tells us the story of, well, us, we need to first put teeth in context: as important milestones on the path to growth.

Different species grow at different rates. How fast you grow is determined by a complicated set of interlocking mechanisms that factor in everything from the mass of the animal to the stability of its environment. Disentangling the factors that propel a species from one developmental milestone to the next has become a branch of evolutionary biological theory in its own right: ‘life history.’ Understanding a species’ life history has major implications for biology: comparing the rate of growth between two species, for instance, gives us insight into different evolutionary strategies. For Homo sapiens, who have some of the slowest growth on the planet, looking at life history becomes a critical way to address why our species has moved its milestones so far from those of our nearest relatives.

Teeth are one of the foremost tools in understanding how animals grow because they arrive in the mouth—erupt—at a very predictable time. This regular schedule reflects the critical importance of having the right teeth at the right time. Animals need different sizes and numbers of teeth at different ages. If you think about trying to fit an entire adult set of teeth into the mouth of a baby, you will rapidly understand why it is that humans come with two sets. Of course, having multiple sets is not the only option—some animals have endless sets, like sharks, and some have sets with some teeth that grow continuously, like hamsters. But for primates like us, there are two sets to worry about: our ‘milk’ or ‘baby’ teeth, and our adult, permanent set. The schedule of which teeth emerge when gives us a clear evolutionary signal of which teeth are needed when, and all teeth have very specific jobs to do.

From about five months in utero, human teeth start to develop. We are born with some of our baby teeth already partially formed but still inside our jaws. The process of ‘teething,’ which parents, in particular, are acutely aware of, is actually a long and drawn-out period over the first few years of life as the deciduous teeth—the formal name for baby teeth, which, after all, are shed just like the leaves of deciduous trees—erupt out of the jaw and into the mouth. First, the incisors in the front, which have the job of nibbling and biting, erupt around 4.5 months; then the lateral incisors to the sides of them around 7.5 months; then the first big bumpy chewing teeth, the molars, around 10.5 months, followed by the ripping and tearing canine teeth, until the last big baby teeth, the chunky second molars, emerge at about 1.5 to 2.5 years old. That’s it for teeth until about 5 years old, when the very first permanent tooth comes into the mouth: the first adult molar or, as it is known by biologists, M1.

Molars have been seen as key to explaining the timing of our life histories. Evolutionary biologists looking to explain patterns of growth and development in primates have observed that the timing of the eruption of M1 is linked very well with the end of dependence on mother’s milk, or the end of infancy. The eruption of M2 has been linked with a stage of childhood usually referred to as the juvenile period, and the ability to forage independently in primates like chimpanzees. Finally, the eruption of the third of our big chewing teeth (M3—the wisdom tooth in humans) has been associated with reaching adulthood, the end of growth, and possibly the start of reproduction.

Careful reconstruction of fossil teeth has shown that earlier probable ancestors, like Australopithecus africanus and Homo erectus, erupted their teeth much faster than we do today. The timing of molar eruption in particular was probably very similar to that of our common ancestor with today’s chimpanzees: modern-day chimpanzees erupt their first permanent molar (M1) at 2 to 4 years old, M2 at around 6 to 8 years, and finish the last (M3) at 12 years, a few years before they are ready to behave as full adults. Humans, by contrast, erupt their molars at around age 5, at 10–11, and at about 18 years of age.

However, even though we have many similarities with our nearest primate relatives, we have somehow become untethered from the biological milestones that signal different life history events. The eruption of our teeth is not timed quite right for when we wean and move our babies onto solid foods. Even in societies where there is no pressure to end breastfeeding early, humans simply do not spend as long as infants on the breast as a primate our size should; we are done before M1 is ready. A large-scale study of forager children around the world found that by 10 years old, when M2 has not quite erupted, children were still only half as competent at getting food as they would be at 20. Meanwhile, age at eruption of M3 is highly variable and not as clearly linked to reproductive age, with things like adolescent growth spurts confusing the picture.

This suggests that perhaps milestones like the timing of teeth are not all that matters in calculating how we got our unique human life histories. Perhaps our drive to grow long and slow means we have untethered our teeth from the behavioral milestones that our closest relatives still display, or perhaps we have just drawn out the time between these milestones in such a way that it is no longer clear how they fit in the primate pattern.

This does not mean that the timing of our teeth doesn’t matter, however. The factors that push our species out to the extreme ends of life-history schedules are not, of course, ours alone. Our costly investment in big brains has long been theorized to be behind our extensive lifespans, but the same links between long lives and big brains have been seen in many other mammals. As a matter of fact, if you map the size of the brain against the timing of teeth, you get a very neat line right across all of the primates, and we fit that line perfectly.

Our first molars emerge at exactly the right time for a primate trying to build an enormous brain the size of ours. If you consider the function of teeth, it makes sense that the emergence of molars would coincide with important points for growth in our species—points at which we need to be able to take in and process more calories, using our molar teeth. New research shows that learning complex skills such as the foraging skills that humans need to exploit their ecological niche may also be an important part of what humans do with their long childhoods—getting the right mix of nutrients for an energy-burning brain is a complicated business, requiring group communication, the invention of tools, and complex mental processes. Perhaps our teeth are right on track, after all.

By Brenna R. Hassett

Author Bio: Brenna R. Hassett, PhD, is a biological anthropologist and archaeologist at the University of Central Lancashire and a scientific associate at the Natural History Museum, London. In addition to researching the effects of changing human lifestyles on the human skeleton and teeth in the past, she writes for a more general audience about evolution and archaeology, including the Times (UK) top 10 science book of 2016 Built on Bones: 15,000 Years of Urban Life and Death, and her most recent book, Growing Up Human: The Evolution of Childhood. She is also a co-founder of TrowelBlazers, an activist archive celebrating the achievements of women in the “digging” sciences.

Source: Human Bridges

Credit Line: This article was produced by Human Bridges.

In Western Sahara: Visiting A Forgotten People

Western Sahara
Map: en.wikipedia.org

06-17-2024 ~ South of the Algerian town of Tindouf on the border with Western Sahara are five refugee camps. The camps are home to the Sahrawi people of Western Sahara and are administered by their freedom movement Polisario, which is fighting to liberate their homeland from Morocco.

Life in the desert camps leaves a deep impression and testifies to a people who, despite limitations, have managed to build a well-organized society under harsh conditions. “We Sahrawis were originally a nomadic people who used to travel around on camels and settled in different places in and around Western Sahara. There were no borders that limited us from moving into what is today Mauritania or Algeria,” said Jadiya who is a translator.

The colonial era saw European powers come to Africa to take over territories, exploit labor, and extract natural resources. In Western Sahara, the Portuguese and French were first beaten back by the local population before Spain managed to colonize the area in 1884. In 1973, the Polisario freedom movement was established by the indigenous Sahrawi people to liberate their land from the Spanish empire.

Western Sahara remained a Spanish colony until 1975 when the Moroccan government organized a so-called “Green March” with 350,000 protesters marching into Western Sahara to claim the land. The protesters pressured Spain to leave Western Sahara, which Morocco then occupied. Today, Western Sahara is still occupied by Morocco and is thus considered to be Africa’s last colony.

Desert Camps
It is around 35 to 40 degrees Celsius in Wilayah of Bojador, the smallest of the five refugee camps on the border with Western Sahara. My feet are boiling in my shoes, but walking barefoot is not an option; the sand is far too hot. According to Filipe, a local Sahrawi engineer educated in the Soviet Union, it has been five to six years since it last rained in the camps. “Not a single drop from the sky,” he says.

In the refugee camps, people live either in simple huts with tin roofs or in “getouns,” square tents with entrances on all sides and a large colored carpet as a floor. Skeletons of cars stripped of wheels, doors, windows, seats, and all interior parts remind me of apocalyptic TV shows. Car doors are reused as fencing for the village’s many goats, which are often seen wandering around in herds on the sand hills in the camp. However, the many car frames work well as playgrounds for children who would otherwise not have access to slides, swings, or climbing frames.

The Wall of Shame
Western Sahara is divided into three areas. There is the region of Western Sahara where the occupying power Morocco is in power. There are the liberated areas of Western Sahara, where the freedom movement Polisario is in power. And then there are the refugee camps in Algeria, where Polisario is also in power. To separate the different areas from each other and maintain control over the occupation, the Moroccan monarchy built a 2,700-kilometer wall across Western Sahara.

“The Wall of Shame,” as the Sahrawis call it, can easily be compared to Israel’s apartheid wall in Palestine, as both were built by occupying powers and effectively force indigenous families and other communities to live apart from each other.

Although the Wall of Shame is built of sand, “it’s the most dangerous wall in the world,” a Polisario soldier says. The wall consists of several lines of defense: barbed wire, dogs, a moat, the wall itself, 150,000 soldiers, and eight million landmines. The outermost line is the minefield. Besides making it harder for Polisario soldiers to penetrate, the mines often kill or maim civilian nomads and local cattle that step on them.

A Temporary Situation
As a result of the Moroccan occupation, thousands of Sahrawis fled in the 1970s to the refugee camps in Algeria, whose government allowed Polisario to administer the camps as part of the liberated territories.

The five refugee camps in Algeria are named after towns in Western Sahara. For example, Wilayah of Bojador is named after the city of Bojador, which is in one of the areas ruled by Morocco. “Each camp is named after one of our cities to signal that the camps are temporary. It’s to show that we will return to our real cities one day,” says engineer Filipe.

Wilayah of Bojador may be the newest and smallest of the five refugee camps administered by Polisario. But when I stand on the camp’s largest hilltop, I can see houses and tents far out on the horizon. All around the camps flies the flag of Western Sahara, which with its black, white, green, and red colors closely resembles the Palestinian flag. The only difference is that the Western Saharan flag has a red crescent and star in the middle. “The black color symbolizes the occupation. Today, the black is at the top, but when we achieve our freedom, from that day on we will fly the black at the bottom,” says Filipe.

A Well-Organized Society
Despite limited access to resources, the Sahrawis have in many ways managed to build a well-organized society. For example, each camp—which is considered a region—is divided into several small districts. Each district has a small health clinic, and each camp has a regional hospital. In addition, there is an administrative camp where the main hospital is located. “If you are ill, you first visit the health clinic in your district. If they cannot help you, you go to the regional hospital. If they cannot help you either, you go to the administrative camp hospital, then to the hospital in the nearby Algerian town of Tindouf, then to the Algerian capital Algiers, and finally to Spain,” says Filipe. “It is very well organized.”

Around the Wilayah of Bojador, there are small shops where you can buy groceries like rice, pasta, potatoes, and canned tuna. In the camp, I encounter everything from a school, kindergarten, women’s association, and a library to a hairdresser, a mechanic, and small stalls selling tobacco or perfume.

A truck travels the narrow, bumpy roads from home to home, filling bags—the size of inflatable trampolines—with water so families can drink, bathe, and wash their clothes. According to the Norwegian Support Committee for Western Sahara, an NGO, international observers describe the Sahrawi refugee camps as “the best-organized refugee camps in the world.”

A Life Outside the Camp
The Sahrawis and Polisario are doing the best they can to create a dignified life for the people in the refugee camps. But it is not free of challenges. According to Fatima, a member of the Sahrawi Youth Union, one of the biggest challenges today is that there is an older generation that can remember a life before the camps, while a large younger generation has lived their entire lives in the camps.

“To prevent children in the camps from growing up without knowing about life outside the camps, we have set up a scheme where children are sent to Spain to live with a family for a period of time. In this way, they become ambassadors for Western Sahara in Spain, and they see that there is life outside the camps,” says Fatima. When Fatima was six years old, she was part of the program. “I had never in my life seen a fish or seen so many green trees in the same place. I thought it was just something you saw in movies. That it wasn’t real. But in Spain, I learned that it’s real,” she recalls.

There are still problems that Polisario and the local population in the camps struggle to solve. Several young men say that job opportunities vary and that they are often unemployed. Even the men and women employed in hospitals and police stations only receive a salary once every three months, and the pay is not high. Many young unemployed Sahrawis must go abroad to find a job. In the meantime, they volunteer in the camps to carry out various practical tasks.

Refugee camps rely on international donations from bodies like the UN or from other countries. When a bus in Spain is damaged and no longer meets national safety requirements, it can be sent to Western Sahara. Here the buses, which are very similar to Danish city buses, drive around in the sand with passengers. But in many ways, the Sahrawis live a limited life in the camps at Tindouf. During my entire stay, I didn’t see a single trash can. The lack of a waste system means that cigarette packs, plastic bottles, and other rubbish are strewn around the camp.

The power goes out frequently and connecting to the internet is generally a problem. The latter is considered a major problem for the Sahrawis, who want to connect with people in the wider world to bring international attention to their resistance struggle.

Promoting the Cause
The Sahrawis are interested in drawing attention to their cause. In the desert, they have established a museum called the Museum of Resistance, where tourists are taken on a journey from the Sahrawi’s original nomadic life through the colonial period and the Moroccan occupation to Polisario’s fight for liberation. The museum includes a miniature version of the Wall of Shame and several of the tanks and weapons that Polisario soldiers have managed to take from the Moroccan military. In the desert you will also find a media house where journalists sit behind desktop computers, writing articles and updating the Polisario website and social media with news from the camps. There are soundproof rooms, microphones, and soundboards to record radio broadcasts, and studios with green screens and video cameras to record TV news. Polisario has its own TV channel.

In addition, the Sahrawis organize the renowned international film festival FiSahara, which brings people from all over the world. Many of the international guests at the film festival come from Spain. Sahrawi President Brahim Ghali met journalists at the festival. He criticized Spain’s prime minister Pedro Sánchez for changing his country’s position regarding Morocco’s occupation; in 2022, Sánchez wrote to Morocco’s King Mohammed VI to say that he agreed with the view that Western Sahara should be autonomous but under Moroccan rule. “We have frozen our relations with the Spanish government, but we still have good relations with the Spanish people,” said Sahrawi President Ghali.

By Marc B. Sanganee

Author Bio: This article was produced by Globetrotter. Marc B. Sanganee is editor-in-chief of Arbejderen, an online newspaper in Denmark.

Source: Globetrotter

Rebuilding The Left Is Crucial To Stemming The Surge Of Europe’s Far Right

C.J. Polychroniou

06-12-2024 ~ Rebuilding the Left Is Crucial to Stemming the Surge of Europe’s Far Right.

Just over half of the 373 million citizens eligible to vote across the 27 European Union (EU) countries bothered to cast a ballot during the four-day European Parliament (EP) election that concluded on June 9. Germany — the EU’s most populous country and its political and economic powerhouse, though its days as an industrial superpower are rapidly coming to an end — saw a record-high voter turnout, with close to 65 percent of eligible voters turning out for the EP elections. Apparently, German citizens may have felt that the “business as usual” approach both in Brussels and inside their own country could no longer go on, which is why they dealt a humiliating defeat to the coalition government of Chancellor Olaf Scholz. His ruling party, the Social Democrats, recorded their worst-ever result in EP elections, obtaining less than 14 percent of the vote. The conservatives — whose policies on immigration and climate have shifted close to the position of the far right on these issues — came in first, with over 30 percent of the vote, while the far right Alternative for Germany (AfD) came second by pulling 16.5 percent of the vote, up from 11 percent in 2019. The Green Party’s vote dropped by 8.5 percentage points, from 20.5 percent to 12 percent.

Undoubtedly, in Germany’s political and cultural environment today, anti-immigrant and anti-climate policies, along with overall support for hardcore conservative and far right outlooks, won the day.

By contrast, Greece, one of the EU’s peripheral countries, saw an explosive and unprecedented (by Greek standards) abstention rate, estimated to be around 60 percent. The ruling conservative party of New Democracy won the elections with 28.31 percent of the vote, taking seven seats, but suffered heavy losses (more than 1 million votes) from the last general elections. The once-radical Syriza party came second with 14.92 percent and four seats, while the social democratic PASOK came third with 12.79 percent and three seats. Greek Solution — an ultra-nationalist far right party which was created in the aftermath of Golden Dawn’s demise following its conviction for operating a criminal enterprise — came fourth with 9.3 percent and two seats, while another far right party, Niki, won 4.37 percent and one seat. The Communist Party received 9.25 percent of the vote and two seats.

The record-low voter turnout in Greece is an indication that its own citizens must have felt there is very little they can do to change the shape of the EU and therefore the course of their everyday lives, so why bother to take part in a boring political ritual of little practical consequence? Indeed, average Greeks are struggling to make ends meet, even as the message spewed on almost every public occasion from the mouth of their right-wing Prime Minister Kyriakos Mitsotakis is that Greece has not only turned the corner but that its economy has become more dynamic than those of Europe’s core economies. The fact is that the Greek economy remains a low-tech, low-value-added economy, with government debt to GDP higher than it was prior to the bailouts, the average monthly salary 20 percent lower than it was 15 years ago, and unemployment still above 10 percent.

Voter turnout in France, the second-largest economy in Europe, did not even reach 50 percent, but the French citizens who decided to cast a ballot in this year’s EP elections expressed a deep desire for change by swinging to the far right and delivering a massive blow to President Emmanuel Macron’s Renaissance party. Marine Le Pen’s National Rally came out on top with 31.5 percent of the vote, more than double the share of Macron’s party. Within an hour of the election results, Macron announced that he would dissolve the lower house of parliament and hold parliamentary elections at the end of the month. His decision to call a snap election was obviously prompted by the apparently unstoppable momentum of the far right, but Macron said he was “confident in the capacity of the French people to make the right choice for themselves and for future generations.” Marine Le Pen welcomed the challenge, as she now feels that the time has come for her party to finally take power. And the first opinion poll suggests that the National Rally could win the snap election, but fall short of an absolute majority.

Like the AfD, the far right National Rally is anti-immigrant, in love with fossil fuels and pro-Israel. Incidentally, the European far right has thrown its weight behind Israel, exchanging its historical commitment to antisemitism for anti-Muslim hatred.

The far right also made major gains in Austria and the Netherlands. Austria’s far right Freedom Party came in first with 27 percent in the EP elections, and Geert Wilders’s Freedom Party in the Netherlands secured seven seats in the new European Parliament, up from zero in the 2019 EP elections.

In Italy, less than half of the electorate voted, and Giorgia Meloni’s far right Brothers of Italy won the most votes, with 28.8 percent, which is greater than it had secured in the 2022 national elections.

Only in some Nordic countries (Sweden, Denmark and Finland) was there a slight retreat of the far right. In that region of Europe, pro-environmental issues remain priorities, though it’s a different matter when it comes to immigration and integration. These countries had different approaches to immigration until a few years ago, but they have now all moved in the same direction, which is to adopt “more restrictive immigration and integration policies,” points out migration specialist Chris Horwood.

The results of the 2024 EP elections confirm what has been rather obvious to careful observers for at least the past decade or so, which is that Europe has been lurching further and further to the right to the point that the far right has now become mainstream and normalized. The left must now ask itself: Why is the far right gaining favor across Europe, and can it be contained?

The first and most important reason for the rise of the far right has to do with the very nature and architecture of the European Union. The main purpose of its predecessor, the European Economic Community (EEC), from the late 1970s onwards was to increase competitiveness in the global market and to boost corporate profits as the era of “managed capitalism” came to an end. Instead of social protections and growth through fiscal-oriented policies, the new economic approach called for privatization and liberalization of financial markets, price stability, and labor market reform. The aim, as revealed by the Single European Act of 1986, was for the EEC to become “a market without a state” and to normalize a noncommitment to welfare. All major decisions were to be made at the top, with powerful countries like Germany almost entirely controlling the economic and social agenda. Subsequently, most member states lost their actual sovereignty as national governments were compelled to follow the commands of the euro masters, and citizens were relegated to a status equal to that enjoyed by the subjects of ancient Rome.

It is within the confines of these structural realities in the design of the EU that political extremism starts to take shape. And lest we forget, Europe is the birthplace of fascism. It would not take much for the old demons to reappear.

The EU’s core periphery divide (and the EU has in fact a number of peripheries — the southern European periphery, the Baltic periphery and the Eastern European periphery) is a major factor for the rise of the far right. In Germany, the AfD was founded by Euroskeptics who opposed the euro, further European integration and the periphery’s bailouts during the 2010 eurozone debt crisis. Euroskeptic parties, movements and sentiments also began to spread in other core European countries around the same time, including Austria, Belgium and the Netherlands. In the periphery (Greece, Italy, Spain and Portugal), the far right began to pick up steam during the period of the eurozone’s debt crisis and it is in the context of the collapsing socioeconomic conditions that immigration becomes a catalyst behind its surge.

The growing economic inequalities inside different EU member states also provided fertile ground for the spread of far right ideology, as mainstream left-wing parties had fully capitulated to the European project and to neoliberalism. The far right employs an anti-establishment language stolen from the old left and presents itself as the defender of working-class interests. Brussels and the domestic elite become its targets, though the far right has no intention of upsetting the functioning of the capitalist system. It’s a ploy for coming to power. Hence its promises to voters of a return to a golden past, which includes bringing back the social state and getting rid of immigrants and refugees.

The establishment left, in turn, has nothing to offer but shallow talk about containing austerity and envisioning a more social and humane Europe. Not a word about radical social change, national dignity, popular democracy and socialism. The fact that the European left failed (and continues to fail) to see that its co-optation into the European capitalist universe bears primary responsibility for the rise of the far right is a matter that should consume historians for a long time to come.

The most dramatic example of the left’s outright betrayal of its own followers took place in Greece in 2015, under the government of the Coalition of the Radical Left (Syriza), with the bailout referendum. “I call on you to say a big ‘no’ to ultimatums, ‘no’ to blackmail,” Prime Minister Alexis Tsipras cried. “Turn your back on those who would terrorize you.”

Greek voters of all political orientations (even right-wingers) listened to the fiery pseudo-leftist and delivered a resounding no to the EU’s bailout terms. Sixty-one percent of the voters said “no” to a deal that would have imposed even more sadistic austerity measures on a nation whose economy and public health care system had collapsed while official unemployment had climbed to 28 percent in 2014. The referendum scared the living daylights out of Greece’s masters, including first and foremost German Chancellor Angela Merkel. But within just a few days, Tsipras caved in to the pressures of the euro masters, shredded the referendum result, and signed a new bailout agreement.

It will take generations for Greek voters to trust the left again.

Yet, just like in the past, the only way to stem the surge of the far right across Europe is with the presence of a politically active, anti-systemic left. In much of the 20th century, this role was carried out by Communist parties, but most of them have been consigned to the dustbin of history, and Marxism has been reduced to an intellectual or academic exercise.

Rebuilding the left, however, is of absolute critical importance for slowing the surge of the far right across Europe. Not for the sake of the EU, but for Europe’s real democracy. To start with, a new narrative is needed to go beyond the virtually exclusive emphasis on austerity as a critique of contemporary capitalism, one that offers a viable alternative for creating just and equitable societies. A conception of a viable socialist political economy needs to be advanced and the transformation of capitalism must be the centerpiece for any socialist program. Movements and parties fighting against inequality and for social justice, peace and the environment should be natural allies in the struggle to create a better world. Of course, in order for this to happen, sectarianism, one of the most prevalent but demoralizing features of left political culture, needs to be overcome. This is a must in the historic mission of the left, which remains a world to win.

Copyright © Truthout. May not be reprinted without permission.

C.J. Polychroniou is a political scientist/political economist, author, and journalist who has taught and worked in numerous universities and research centers in Europe and the United States. Currently, his main research interests are in U.S. politics and the political economy of the United States, European economic integration, globalization, climate change and environmental economics, and the deconstruction of neoliberalism’s politico-economic project. He is a regular contributor to Truthout as well as a member of Truthout’s Public Intellectual Project. He has published scores of books and over 1,000 articles which have appeared in a variety of journals, magazines, newspapers and popular news websites. Many of his publications have been translated into a multitude of different languages, including Arabic, Chinese, Croatian, Dutch, French, German, Greek, Italian, Japanese, Portuguese, Russian, Spanish and Turkish. His latest books are Optimism Over Despair: Noam Chomsky On Capitalism, Empire, and Social Change (2017); Climate Crisis and the Global Green New Deal: The Political Economy of Saving the Planet (with Noam Chomsky and Robert Pollin as primary authors, 2020); The Precipice: Neoliberalism, the Pandemic, and the Urgent Need for Radical Change (an anthology of interviews with Noam Chomsky, 2021); and Economics and the Left: Interviews with Progressive Economists (2021).




Roman Oligarchs Avoided Tax Liability And Restrictions On Land Size

Ager publicus – Ills.: de.wikipedia.org

06-10-2024 ~ The oligarchic tradition of land-grabbing and tax dodging goes back centuries.

Roman land tenure was based increasingly on the appropriation of conquered territory, which was declared public land, the ager publicus populi Romani. The normal practice was to settle war veterans on it, but the wealthiest and most aggressive families grabbed such land for themselves in violation of early law.

Cassius’ Indecent Proposal
The die was cast in 486 BC. After Rome defeated the neighboring Hernici, a Latin tribe, and took two-thirds of their land, the consul Spurius Cassius proposed Rome’s first agrarian law. It called for giving half the conquered territory back to the Latins and half to needy Romans, who were also to receive public land that patricians had occupied[1]. But the patricians accused Cassius of “building up a power dangerous to liberty” by seeking popular support and “endangering the security” of their land appropriation. After his annual term was over he was charged with treason and killed. His house was burned to the ground to eradicate memory of his land proposal (Livy, History of Rome 2.41).

Patricians Versus Plebs
The fight over whether patricians or the needy plebeians would be the main recipients of public land dragged on for 12 years. In 474 the commoners’ tribune, Gnaeus Genucius, sought to bring the previous year’s consuls to trial for delaying the redistribution proposed by Cassius (Livy 2.54 and Dionysius 9.37-38). He was blocked by that year’s two consuls, Lucius Furius and Gaius Manlius, who said that decrees of the Senate were not permanent law, “but measures designed to meet temporary needs and having validity for one year only.” The Senate could renege on any decree that had been passed.

A century later, in 384, M. Manlius Capitolinus, a former consul (in 392), was murdered for defending debtors: he had tried to use tribute taken from the Gauls and to sell public land to redeem plebeian debts, and had accused senators of embezzlement, urging them to use their takings to redeem debtors. It took a generation of turmoil and poverty for Rome to resolve matters. In 367 the Licinio-Sextian law limited personal landholdings to 500 iugera (125 hectares, under half a square mile; see Livy 6.35-36). Indebted landholders were permitted to deduct interest payments from the principal and pay off the balance over three years instead of all at once.

Gifts of Land
Most wealth throughout history has been obtained from the public domain, and that is how Rome’s latifundia were created. The most fateful early land grab occurred after Carthage was defeated in 204. Two years earlier, when Rome’s life-and-death struggle with Hannibal had depleted its treasury, the Senate had asked families to voluntarily contribute their jewelry or other precious belongings to help the war effort. Their gold and silver were melted down in the temple of Juno Moneta to strike the coins used to hire mercenaries.

Upon the return to peace, the aristocrats depicted these contributions as having been loans, and convinced the Senate to pay their claims in three installments. The first was paid in 204, and a second in 202. As the third and final installment was coming due in 200, the former contributors pointed out that Rome needed to keep its money to continue fighting abroad, but had much public land available. In lieu of cash payment they asked the Senate to offer them land located within fifty miles of Rome, and to tax it at only a nominal rate. A precedent for such privatization had been set in 205 when Rome sold valuable land in the Campania to provide Scipio with money to invade Africa.

The recipients were promised that “when the people should become able to pay, if anyone chose to have his money rather than the land, he might restore the land to the state.” Nobody did, of course. “The private creditors accepted the terms with joy; and that land was called Trientabulum because it was given in lieu of the third part of their money” (Livy 28.46).

Latifundia Changed Rome’s Economy Forever
Arnold Toynbee[2] describes this giveaway of Rome’s ager publicus as the turning point polarizing its economy by deciding, “at one stroke, the economic and social future of the Central Italian lowlands.” Most of this land ended up as latifundia cultivated by slaves captured in the wars against Carthage and Macedonia and imported en masse after 198. This turned the region into “predominantly a country of underpopulated slave-plantations” as the formerly free population was driven off the land into overpopulated industrial towns. In 194 and again in 177 the Senate organized a program of colonization that sent about 100,000 peasants, women, and children from central Italy to more than twenty colonies, mainly in the far south and north of Italy. Some settlers lost their Roman citizenship, and they must have remained quite poor as the average land allotment was small.

The Gracchi and Civil War
In 133, Tiberius Gracchus advocated distributing ager publicus to the poor, pointing out that this would “increase the number of property holders liable to serve in the army.” He was killed by angry senators who wanted the public land for themselves. Nonetheless, a land commission was established in Italy in 128, “and apparently succeeded in distributing land to several thousand citizens” in a few colonies, but not any land taken from Rome’s own wealthy elite. The commission was abolished around 119 after Tiberius’s brother Gaius Gracchus was killed.[3]

Appian (Civil Wars 1.1.7) describes the ensuing century of civil war as being fought over the land and debt crisis.

“For the rich, getting possession of the greater part of the undistributed lands, and being emboldened by the lapse of time to believe that they would never be dispossessed, absorbing any adjacent strips and their poor neighbors’ allotments, partly by purchase under persuasion and partly by force, came to cultivate vast tracts instead of single estates, using slaves as laborers and herdsmen, lest free laborers should be drawn from agriculture into the army. At the same time the ownership of slaves brought them great gain from the multitude of their progeny, who increased because they were exempt from military service. Thus certain powerful men became extremely rich and the race of slaves multiplied throughout the country, while the Italian people dwindled in number and strength, being oppressed by penury, taxes and military service.”

How Land Changed Rome’s Army
Dispossession of free labor from the land transformed the character of Rome’s army. Starting with Marius, landless soldiers became soldati, living on their pay and seeking the highest booty, loyal to the generals in charge of paying them. Command of an army brought economic and political power. When Sulla brought his troops back to Italy from Asia Minor in 82 and proclaimed himself Dictator, he tore down the walls of towns that had opposed him, and kept them in check by resettling 23 legions (some 80,000 to 100,000 men) in colonies on land confiscated from local populations in Italy.

Sulla Steals Estates and Sells Them for Support
Sulla drew up proscription lists of enemies who could be killed with impunity, with their estates seized as booty. Their names were publicly posted throughout Italy in June 81 BC, headed by the consuls for the years 83 and 82, and about 1,600 equites (wealthy publican investors). Thousands of names followed. Anyone on these lists could be killed at will, with the executioner receiving a portion of the dead man’s estate. The remainder was sold at public auctions, the proceeds being used to rebuild the depleted treasury. Most land was sold cheaply, giving opportunists a motive to kill not only those named by Sulla, but also their personal enemies, to acquire their estates. A major buyer of confiscated real estate was Crassus, who became one of the richest Romans through Sulla’s proscriptions.

By giving his war veterans homesteads and funds from the proscriptions, Sulla won their support as a virtual army in reserve, along with their backing for his new oligarchic constitution. But they were not farmers, and ran into debt, in danger of losing their land. For his more aristocratic supporters, Sulla distributed the estates of his opponents from the Italian upper classes, especially in Campania, Etruria, and Umbria.

Battle of Generals
Caesar likewise promised to settle his army on land of their own. They followed him to Rome and enabled him to become Dictator in 49. After he was killed in 44, Brutus and Cassius vied with Octavian (later Augustus), each promising their armies land and booty. As Appian (Civil Wars 5.2.12-13) summarized: “The chiefs depended on the soldiers for the continuance of their government, while, for the possession of what they had received, the soldiers depend on the permanence of the government of those who had given it. Believing that they could not keep a firm hold unless the givers had a strong government, they fought for them, from necessity, with good-will.” After defeating the armies of Brutus, Cassius, and Mark Antony, Octavian gave his indigent soldiers “the land, the cities, the money, and the houses,” becoming “the object of denunciation on the part of the despoiled” and one who “bore this contumely for the army’s sake.”

Imperial Estates
The concentration of land ownership intensified under the Empire. Brown[4] notes that by the time Christianity became the Roman state religion, North Africa had become the main source of Roman wealth, based on “the massive landholdings of the emperor and of the nobility of Rome.” Its overseers kept the region’s inhabitants “underdeveloped by Roman standards. Their villages were denied any form of corporate existence and were frequently named after the estates on which the villagers worked, held to the land by various forms of bonded labor.”

A Christian from Gaul named Salvian[5] described the poverty and insecurity confronting most of the population ca. 440:

“Faced by the weight of taxes, poor farmers found that they did not have the means to emigrate to the barbarians. Instead, they did what little they could do: they handed themselves over to the rich as clients in return for protection. The rich took over title to their lands under the pretext of saving the farmers from the land tax. The patron registered the farmer’s land on the tax rolls under his (the patron’s) own name. Within a few years, the poor farmers found themselves without land, although they were still hounded for personal taxes. Such patronage by the great, so Salvian claimed, turned free men into slaves as surely as the magic of Circe had turned humans into pigs.”

Church Estates
Church estates became islands in this sea of poverty. As deathbed confessions and donations of property to the Church became increasingly popular among wealthy Christians, the Church came to accept existing creditor and debtor relationships, land ownership, hereditary wealth, and the political status quo. What mattered to the Church was how the ruling elites used their wealth, regardless of how they obtained it as long as it was destined for the Church, whose priests were the paradigmatic “poor” deserving of aid and charity.

The Church sought to absorb local oligarchies into its leadership, along with their wealth. Testamentary disposition undercut local fiscal balance. Land given to the Church was tax-exempt, obliging communities to raise taxes on their secular property in order to maintain their flow of public revenue (many heirs found themselves disinherited by such bequests, leading to a flourishing legal practice of contesting deathbed wills). The Church became the major corporate body, a sector alongside the state. Its critique of personal wealth focused on personal egotism and self-indulgence, nothing like the socialist idea of public ownership of land, monopolies, and banking. In fact, the Crusades led the Church to sponsor Christendom’s major secular bankers to finance its wars against the Holy Roman Emperors, Muslims, and Byzantine Sicily.

[1] Roman Antiquities by Dionysius of Halicarnassus, 8.77.2.
[2] Hannibal’s Legacy by Arnold Toynbee, 1965, II: pp. 250-51 and pp. 341-373.
[3] Conquerors and Slaves by Keith Hopkins, 1978, pp. 61-63.
[4] Through the Eye of a Needle: Wealth, the Fall of Rome, and the Making of Christianity in the West, 350-550 AD by Peter Brown, 2012, pp. 330, 366, and 327.
[5] De gubernatione Dei (“The Government of God”) 5.9.45, paraphrased and discussed in Through the Eye of a Needle: Wealth, the Fall of Rome, and the Making of Christianity in the West, 350-550 AD by Peter Brown, 2012, pp. 433-450.

By Michael Hudson

Author Bio: Michael Hudson is an American economist, a professor of economics at the University of Missouri–Kansas City, and a researcher at the Levy Economics Institute at Bard College. He is a former Wall Street analyst, political consultant, commentator, and journalist. You can read more of Hudson’s economic history on the Observatory.

Source: Human Bridges

Credit Line: This article was produced by Human Bridges.