Verba manent, scripta Volant #1: Nuclear semiotics: the preservation of information forever.

What has changed with digitalization?

There are just three ontological (i.e. necessary, unavoidable) changes affecting how we interact with data/information generated/managed through computers, compared to how we interact with analogue (i.e. paper-based) registrations/inscriptions/documents.

The first two differences are inherent, and apply even to stand-alone computers.

The third difference is a consequence of computer networking on closed Local Area Networks (LANs).

A fourth difference is often mentioned but, on closer inspection, it is not an inherent difference between analogue and digital data/information; it is the consequence of (bad) design of IT systems.

The function of documents and of signatures, in their inherent nature (ontology), has remained unchanged over the last 2,000 years, even after digitalization.

 Digitalization of data/information changed the following two fundamental aspects of how information is generated, accessed and processed:

Change #1 (ontological): digital data, information and documents are not perceivable/understandable by humans without the use of some tool (a display, a speaker or a printer). The relation between the human person and the information/document has become mediated.

Change #2 (ontological): the creation/elaboration of data/information became much more complex and sophisticated. Data crunching and word processing have made it possible for a single user (since the first day of personal computing) to create documents of such complexity that, before digitalization, entire teams of dedicated people were needed to create them.

Computers interconnected on a Local Area Network (LAN) introduced a third ontological change in the way documentation was generated and shared:

Change #3 (ontological): already with LANs (Local Area Networks), the physical location of the machine keeping the data/information became irrelevant, as long as the IT system keeping such data/information allowed access (only) to the persons entitled to it (1).

These are the true ontological changes that affected digital documents (and digital registrations and inscriptions).

There is a fourth change, according to many:

“Change #4”: many (if not all) users of an IT system had unrestricted access to the same digital data/information, without any possibility for the other users to be aware of it. A paper document, by contrast, can be accessed only by those who can access the room/drawer/file where the paper is archived.

IT systems managing digital data/information could easily have been designed from scratch in a way that guarantees that
a) access is restricted,
b) access (when permitted) is traceable,
c) change is restricted,
d) change (when permitted) is traceable,
exactly as it was (and is) customary with paper-based documents. Properly designed, an IT system would have made digital data/information more secure and even more difficult to modify than paper-based documents, and with a complete audit trail of the changes. So, in my opinion, there is no “Change #4”; there is “Bad Design #1”: the digital home of digital data had no doors and no locks, and everyone had access to the system and was entitled to change its data/information (i.e. recordings, inscriptions, and documents).
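To make the point concrete, here is a minimal sketch (in Python, with purely hypothetical user names and permissions, not a description of any real product) of an IT system in which every access and every change is both restricted and traceable through an append-only audit log:

```python
import datetime

# Hypothetical permissions table: who may read, who may write.
PERMISSIONS = {"alice": {"read", "write"}, "bob": {"read"}}

audit_log = []  # append-only: entries are added, never rewritten


def log(user, action, document):
    # Every permitted or refused operation leaves a dated trace.
    audit_log.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "who": user,
        "action": action,
        "document": document,
    })


def read(user, documents, name):
    if "read" not in PERMISSIONS.get(user, set()):
        log(user, "read REFUSED", name)
        raise PermissionError(f"{user} may not read {name}")
    log(user, "read", name)
    return documents[name]


def write(user, documents, name, content):
    if "write" not in PERMISSIONS.get(user, set()):
        log(user, "write REFUSED", name)
        raise PermissionError(f"{user} may not modify {name}")
    log(user, "write", name)
    documents[name] = content


docs = {}
write("alice", docs, "contract.txt", "first draft")
print(read("bob", docs, "contract.txt"))
print(audit_log)  # complete audit trail of accesses and changes
```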

The true inherent changes are all related to Change #1, Change #2 and Change #3, and combinations thereof. The modifiability of digital data/information is not inherent to IT systems; it is a consequence of a given (bad) design.

Furthermore, IT experts hold a kind of common-sense belief about the modifiability of data/information that, according to semiotics, is plainly wrong.

The lifecycle of data/information is essential in order to define its semiotic characteristics:

a) If there is the need to make information available for 10,000 years or more, an inscription on some support will not work:
a1. because the sign may fade to the point of being unrecognizable;
a2. because the language will have changed so much that nobody is able to understand it anymore (languages change to the point of becoming impossible to understand within a time span of 200 to 800 years);
a3. because the support of the sign may have decayed.
So, according to the Human Interference Task Force, only a properly designed activity, ongoing for 10,000 years, can convey the information, not just a sign on a medium. http://en.wikipedia.org/wiki/Nuclear_semiotics.
One of the solutions proposed in order to make information available for over 10,000 years is the creation of an “Atomic Priesthood”: http://www.semiotik.tu-berlin.de/menue/zeitschrift_fuer_semiotik/zs-hefte/bd_6_hft_3/
In the end, only oral tradition can carry information beyond a millennium. This is why it is important for each religion to have rites through which to pass on its timeless truths (for Christianity, the Sunday mass). For information that has to last for millennia, the sign and the medium on which the sign is engraved necessarily have to change and evolve, with the evolution of human society!

b) If we need to provide information for a time span that is within the combined duration of the language and of the medium used, then a static “document” (i.e. registration, inscription or document, in their ontological meaning) can fulfill the task (see my previous post http://www.digitalagreement.eu/2012/12/10/traces-recordings-inscriptions-and-documents/). In this case, the meanings of “static” and “modifiable” are relative to the purpose of the “document” (i.e. document, inscription or recording) (2). Legislation may define the minimum formal requirements of legal documents, depending on their use and relevance: there are requirements on how to write down legislation (parchment is required by some constitutions), on the formal requirements of a contract in written form (all civil law systems provide such a definition) or of a handwritten will (most civil law systems provide a definition of the holograph testament); there are, moreover, requirements on the form of banknotes and formal identification documents, passports, or other documents representing equity or obligations of public/private companies, and so on. Proof by witnesses, presumptions or formal documentation is needed in order to provide full proof of a legal act or of a legal fact, depending on the legal system. The strongest requirements apply to a limited set of contracts:
b1. for some, it is necessary not only to have a document, but also witnesses attesting to the origin and the authenticity of the document, with an interesting mix of written and oral documentation: e.g. marriage;
b2. for others, it is necessary that they be filed and archived with the competent authority (registrar of companies, or secretary of state): e.g. limited liability companies, real estate property.

Nevertheless, no legislation makes the validity of documents depend on their inherent modifiability. What counts is that, in the few cases where a specific form of contract is required by the law, the form of the document complies with such law. So it shall be for digital documents.

What has not changed with digitalization?

The function of a document has not changed. It is commonplace to state that digital documents are not inherently static, unlike paper-based documents, implying that they may not be real documents.

Now, the truth is that all documents become unreadable after more than 800 years, either because the traces in the support fade away or because the support decays. So what is “static”? A written leaf of paper left in the rain becomes unreadable after a few minutes. It is (was) possible to forge passports and even banknotes, so you can figure out how easy it may be to forge a contract written with ink on a normal DIN A4 page!

And if you kept an archive of chronologically organized papers in a building whose walls are made of mud, with no doors, no locks, no heating, no ventilation and no maintenance, such documents would also mold, stick together and become useless after just a few years (or just one bad winter, or one rainy season).

So the concept of a “static document” is very relative and poorly defined, a small-talk commonplace.

A “static” piece of paper implies quite a lot of organization and technology that we simply take for granted. But if we lived in an igloo in Iceland or in a hut in the Amazon, such organization and technology would be very hard to come by.

Digital files, too, could be protected by default and by design, by means of:

a) the hardware used to store and manage such files, and/or

b) the operating system, and/or

c) access management software, and/or

d) a combination of the above-mentioned technologies (a minimal sketch of option b follows this list).
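As a minimal illustration of option b) alone, an operating system can be told to make a finalized file read-only, so that later modification attempts by ordinary users fail; the file name below is hypothetical, and an administrator can still override the setting, which is exactly why the other layers matter:

```python
import os
import stat

path = "contract_final.pdf"  # hypothetical finalized document
open(path, "w").close()      # stand-in for the finalized file

# Drop all write bits: owner, group and others may read, nobody may modify.
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

# A later attempt to open the file for writing now fails for ordinary users.
try:
    with open(path, "w") as f:
        f.write("tampered")
except PermissionError:
    print("the document is protected against modification")
```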

So the only novelty about digital documents is:

a) that humans are able to perceive them only through tools (computers) that elaborate them logically (and not analogically, like magnifying glasses, microscopes or symmetric code decryption), and

b) that they are far more complex than written paper.

Before ubiquitous computing, even digital documents lived within the borders of a single piece of hardware or of a single LAN, so that their origin and their context were defined. This is not true anymore, now that billions of data/information/documents are exchanged every day over open networks.

So it may be that digital data/information/documents (i.e. recordings, inscriptions, documents) get modified involuntarily, or without the author(s) being aware of it. But this was also true of all archived documents, particularly when the archivist was a different person than the author.

This hypothetical difference between analogue and digital data/information/documents (i.e. digital recordings, inscriptions and documents) comes down, in the end, to how a specific digital or analogue document has been archived and preserved, and is not an ontological difference.

The current common sense that digital documents are inherently dynamic and easily forgeable, compared to paper-based documents, is not an absolute truth. It takes for granted many aspects of the current design of information technology that may change radically, even in the near future.

The function of the signature has not changed. It is true, handwritten signatures cannot be affixed to digital data/information/documents (i.e. registrations/inscriptions/documents) in order to validate them. But this is hardly new! Humanity is back to square one: digital documents may be marked with some elaborated sign that has no biometric link to the author of the document (an electronic signature), exactly as was customary from the second century B.C. to the middle of the nineteenth century A.D.: a time when “scribes” drafted and signed documents on behalf of the author(s). It was the time before graphology had been invented as a (disputed) pseudoscience by Abbé Michon, who became interested in handwriting analysis in 1830 and published his findings shortly after founding the Société Graphologique in 1871.

For about two millennia, signatures were routinely affixed (in an intermediated way) by slaves or employees of the signer. No more and no less than when we now add our name and surname at the end of an email, typing them on the keyboard. As with handwritten signatures affixed by scribes on behalf of the signer, the validation of digital registrations/inscriptions/documents has become a highly complex process, in which social engineering and technology ensure the authenticity and the origin authentication of the digital data.

The social engineering part in Rome was the Forum, where it was compulsory to close agreements and where it was compulsory to open the seals sealing the contracts in order to preserve their validity and enforceability. In the Middle Ages the Forum was replaced by the notary’s office. Today social engineering is provided by contract law, legal compliance and accountancy rules.

In ancient Rome the technological part was represented by the sealed wooden tablets that were conserved by the parties to the agreement (very similarly to a digital document with an electronic signature!). In the Middle Ages the technological part was the professional organization of archives (with encryption/decryption facilities) by the notaries and the clerks of the government. Today the technologies are ERPs, document management systems and PKIs.
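As a rough sketch of the PKI part (using the third-party Python "cryptography" package; certificates, key management and timestamping are deliberately left out), signing a digital document and verifying the signature plays the role that sealing and opening the wooden tablets played in Rome:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The signer holds a private key; relying parties hold the public key
# (in a real PKI the public key is bound to the signer by a certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"I, the undersigned, donate the painting to my son."

# "Sealing": the signature binds this exact sequence of bytes to the key holder.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# "Opening the seal": verification fails if even one byte has changed.
try:
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("document intact and attributable to the key holder")
except InvalidSignature:
    print("document modified or not signed by this key")
```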

From a global and historical perspective, for only less than 150 years out of two millennia has there been reliance on the ability to verify a handwritten signature through a graphologist: this is the reason why the presence of witnesses has remained customary in private negotiations and is still mandatory (in some legal systems) for the most relevant notarial contracts.

The biometric handwritten signature (on paper) has had the same lifespan as the tailcoat with the white bow tie!

Footnotes:

(1) With stand-alone personal computers or mainframes, data/information were located in the same room where the hardware was located; with LANs, the information was located in the server/mainframe of the LAN, normally in the same building; later on, when the cost of geographic networks became affordable (i.e. the cost of data transmission became independent from the distance to be covered by the network, and that was, again, around year 2000), it was located somewhere in the same region/nation. From that moment on, it became possible to distribute data/information over a set of different locations.

(2) There is no inherent lifespan to a given document. A train ticket is relevant and useful for the duration of the trip. But if we want to tax-deduct the cost of that trip, then the ticket is relevant for the time that the tax authority is entitled to verify it. If an audit requires the document to be preserved for a longer time than its fiscal relevance, then from an auditing point of view there will be a third lifespan for such a ticket.

 

 


Traces, Recordings, Inscriptions and Documents

The best ontological definition of a document I was able to find is in “Documentality: Why It Is Necessary to Leave Traces (Commonalities)” by Maurizio Ferraris (http://www.amazon.com/Documentality-Necessary-Leave-Traces-Commonalities/dp/0823249697), which is the translation of “Documentalità: perché è necessario lasciar tracce” by Maurizio Ferraris (http://www.amazon.it/Documentalit%C3%A0-Perch%C3%A9-necessario-lasciar-tracce/dp/8842091065/ref=sr_1_4?s=books&ie=UTF8&qid=1355064359&sr=1-4).

Documents are at the top of a four-layered pyramid of signs.
On the left-hand side we have the escalation of objects towards documents, and on the right-hand side we have the escalation of signs towards documents. At the apex, documents and objects merge: a banknote or a passport is at the same time an object and a document. Marcel Duchamp’s “Bicycle Wheel” is an object and a document: http://www.verkstad.com/images/art/duchamp/bicycle.jpg

Four-layered Documental Pyramid: traces, registrations, inscriptions, documents (Copyright Maurizio Ferraris 2012)

 

At the bottom of the documental pyramid we have the trace. It is a sign that has been generated by events, with no meaning, intention or purpose whatsoever. In IT, the only example of a trace that comes to my mind is the trace left by deleted data on a storage medium.

At the next level we have the recording, where the trace is generated on a medium that is (ontologically) designed for keeping the trace over time: like the recordings of a camera or a microphone left switched on, or like something that I passively memorize in my brain. Recordings have to be accessible by at least one person. They are not created with a specific meaning or purpose; they simply exist on a medium that is designed for the purpose of recording facts. The difference between a trace and a recording is the functional nature of the medium: on a wall you have the trace of a flood; on a hydrometer the same trace is a recording. In the digital environment, recordings are all the digital data generated by a given IT system without the aim of being seen by anyone other than the system administrator.

At the next level we have inscriptions, where the recording is ontologically accessible by more than one person: inscriptions are recordings that are made intentionally for the purpose of being read by more than one person. The intention of the author of the inscription is twofold: aimed at making the inscription and at making it accessible. An inscription has no inherent semantic aim: the author of the inscription does not want anything more than to leave a sign and make it available to at least one more person. The intention to leave a sign differentiates inscriptions from recordings. In the digital environment, inscriptions are all the organized data that are kept together to provide information not only to the system administrator, but also to other authorized persons (like auditors, experts, etc.).

Documents are the inscriptions that the author has generated in order to convey a specific semantic message. If there is the further aim of having a socially recognized effect, we speak of legal documents (or strong documents); if the aim of the document is simply to convey specific information to at least one additional person (think of an encrypted love message), with no socially accepted consequence whatsoever, we speak simply of documents (or weak documents). The difference between strong and weak documents is of huge social importance: in fact, strong documents may have to be generated in a formal context, may have to use a language that is formally accepted by the social context (in order to avoid ambiguity), may need to have an author that is defined or definable, may need to have a recipient that is defined or definable, and may need to fulfill specific formal requirements. But most importantly of all, the meaning of a strong document is strongly determined by the social context, and the individual intention may become just one of the factors that determine its final meaning.
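The four layers can be summarized as a simple decision rule. The sketch below is my own reading of the taxonomy, not Ferraris’ formulation: what moves a sign up the pyramid is the purpose of the medium, the intended audience and the intended meaning.

```python
def classify(sign):
    """Classify a sign according to the four-layer documental pyramid."""
    if not sign["medium_made_for_keeping_signs"]:
        return "trace"        # e.g. a flood mark on a wall, residue of deleted data
    if not sign["meant_to_be_accessible_to_others"]:
        return "recording"    # e.g. raw system data kept only for the administrator
    if not sign["carries_intended_semantic_message"]:
        return "inscription"  # e.g. security-camera footage kept for auditors/police
    if sign["meant_to_have_social_legal_effect"]:
        return "strong document"  # e.g. a contract, a banknote, a passport
    return "weak document"        # e.g. an encrypted love message

print(classify({
    "medium_made_for_keeping_signs": True,
    "meant_to_be_accessible_to_others": True,
    "carries_intended_semantic_message": True,
    "meant_to_have_social_legal_effect": True,
}))  # -> "strong document"
```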


Year 2000, when digitalisation became ubiquitous.

I would draw a clear distinction between the impact of digitalisation before and after ubiquitous connectivity (1). In fact, in 2000 WiFi, ADSL, UMTS and Application Layer Firewalls (3) started to be commercialized, making computer interconnection the norm within a few years, whereas it had rather been the exception before.

Before 2000, computers were interconnected only through Local Area Networks (LANs), whose first commercial application was in December 1977 at Chase Manhattan Bank in New York (2).
Crucially, in 2000 the connection speed to the Internet increased dramatically through ADSL on fixed lines and UMTS in mobile networks, so that it became normal to download at a speed of 8 Mbit/s (through ADSL) and 384 kbit/s (through UMTS). A great leap forward, compared to the previous maximum download speed of 56 kbit/s.
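A back-of-the-envelope comparison (the 5 MB file size is chosen arbitrarily for illustration) gives a sense of that leap:

```python
FILE_SIZE_BITS = 5 * 8 * 1_000_000  # a 5 megabyte file, purely illustrative

speeds_bit_per_s = {
    "56 kbit/s dial-up modem": 56_000,
    "384 kbit/s UMTS (R99)": 384_000,
    "8 Mbit/s ADSL": 8_000_000,
}

for name, speed in speeds_bit_per_s.items():
    seconds = FILE_SIZE_BITS / speed  # transfer time at the nominal speed
    print(f"{name}: about {seconds / 60:.1f} minutes ({seconds:.0f} s)")
# dial-up: ~12 minutes; UMTS: ~1.7 minutes; ADSL: ~5 seconds
```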

The Internet as we know it today (i.e. YouTube, Facebook, Pinterest, Tumblr) would not be usable with lower upload/download speeds. Even Twitter, which in theory requires no broadband, is so popular because:
a) of its ability to post images and links (which require broadband);
b) it is a “no-frills” social network, communicating with messages of just 140 characters (appealing, as opposed to multimedia bulk-message social networks like YouTube or Facebook).

The distinction between before and after ubiquitous computing is arbitrary, I know, as is dating it to 2000. For the purpose of my reasoning, it does not matter whether it would be more accurate to date ubiquitous computing to 1996 (a very unlikely date, because WiFi, UMTS and ADSL had not yet been developed) or to 2005. Year 2000, anyway, sounds momentous, and I will therefore use it.

At that time there were fewer than 400 million Internet users and about 500 million mobile subscribers. Today we have more than two billion Internet users and more than six billion mobile subscribers: http://www.itu.int/ITU-D/ict/statistics/.

 

Footnotes:

(1) Ubiquitous connectivity with upload/download speeds that allow the use of computers and mobile devices as we do today became a reality around 2000, when the Internet had about 400 million users, most of whom were accessing it at speeds between 14.4 kbit/s and 56 kbit/s.

Ubiquitous connectivity as we know and use it today started in 1999-2000 with the ADSL broadband connection: the ITU G.992.1 standard, with download speeds of up to 8 Mbit/s and upload speeds of up to 1 Mbit/s: http://www.itu.int/rec/T-REC-G.992.1.

Until 1999, Internet connections were no faster than 56 kbit/s download and 33.6 kbit/s upload (ITU-T recommendations V.90 and V.92). ISDN, too, had a download/upload speed of just 128 kbit/s for each duplex connection (1988 CCITT red book, then an ITU-T recommendation, now a European Norm, ETSI EN 300485): www.etsi.org/deliver/etsi_en/300400_300499/300485/01.03.01_20/en_300485v010301c.pdf

For mobile devices, too, ubiquitous connectivity became possible from 2000 onward, with UMTS and then HSDPA: UMTS supports maximum theoretical data transfer rates of 42 Mbit/s when HSPA+ is implemented in the network. Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release ’99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s for HSDPA handsets on the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel (or multiple 9.6 kbit/s channels in HSCSD, or 14.4 kbit/s for CDMAOne channels).

Since 2006, UMTS networks in many countries have been, or are in the process of being, upgraded with High Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G. Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with the High-Speed Uplink Packet Access (HSUPA). In the longer term, the 3GPP Long Term Evolution (LTE) project plans to move UMTS to 4G speeds of 100 Mbit/s down and 50 Mbit/s up, using a next generation air interface technology based upon orthogonal frequency-division multiplexing: http://en.wikipedia.org/wiki/UMTS http://www.etsi.org/website/technologies/umts.aspx  http://en.wikipedia.org/wiki/Internet_access

(2) http://www.computerworld.com/s/article/9060198/The_LAN_turns_30_but_will_it_reach_40_

LANs were popular as a way to share, within a local working community, the scarce resources of that time: large storage and printers. The Ethernet LAN is today a standard managed by the Institute of Electrical and Electronics Engineers: IEEE 802.3 http://standards.ieee.org/about/get/802/802.3.html

The evolution of LANs that made them ubiquitous, not only in the office but also at home, is WiFi, a.k.a. the IEEE 802.11 standard. http://standards.ieee.org/about/get/802/802.11.html

Anyway, it is still the norm that all users of a LAN have administrator credentials, and that there is normally no way to know when digital data/information are final/stabilized, unless they are somehow externalized (i.e. published/shared/printed).

(3) The use of personal firewalls has eventually become normal since the advent of ubiquitous computing. Their deployment on personal computers dates back to 1994, with the definition of the Application Layer Firewall, which had its commercial breakthrough at the beginning of 2000. The Application Layer Firewall is, together with UMTS, ADSL and WiFi, one of the major technological breakthroughs that have shaped ubiquitous computing since the beginning of this millennium. http://en.wikipedia.org/wiki/Application_layer_firewall#History

 


Where are we heading to … ? (Part 2)

No Country for Old men (…is anybody in charge here…?)

What are the implications of current social expenditure in OECD countries?

I have to admit that the issue seems to me so obvious that I still fear I may have totally misunderstood the numbers and the trends.

So I hope that, among my readers, there will be someone who will help me find the (missing) information that misleads me in my assessments.

Today Europe has about 500 million inhabitants. By adding the number of young people (aged 0-19) to the older population (above 65) in 2010, we get a total dependency ratio of 63.2%.

That means that in the EU27 there are 63.2 young-age or old-age dependent persons for every 100 people of working age: 63.2/100 = 63.2%. In the EU-27 this is equivalent to about three people of working age for every two dependent people.

The number of people younger than 20 provides the basis for calculating the young-age dependency ratio (although in some reports on young-age dependency, young age is defined differently, ending at 15). Today the young-age dependency ratio is 23% and in 2060 it will be 25%. That means that over the coming 50 years there will be about four persons of working age (20-65) for each young-age person (0-19). http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Glossary:Young-age-dependency_ratio

The number of people aged 65 or more provides the basis for calculating the old-age dependency ratio: today the old-age dependency ratio in the EU27 is 26.2%, i.e. there are 26.2 old-age persons for every 100 persons of working age (as defined above). In 2045 the old-age dependency ratio will be 45.42% and in 2060 it will be 52.55%. http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Glossary:Old-age-dependency_ratio
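The ratios themselves are simple arithmetic. The sketch below uses purely illustrative population figures, not Eurostat’s, and shows why the result depends so much on how the age brackets are defined:

```python
def dependency_ratios(young, working_age, old):
    """Young-age, old-age and total dependency ratios, in percent of the working-age population."""
    return {
        "young-age": round(100 * young / working_age, 1),
        "old-age": round(100 * old / working_age, 1),
        "total": round(100 * (young + old) / working_age, 1),
    }

# Purely illustrative population counts in millions, not Eurostat figures;
# widening or narrowing the working-age bracket changes every ratio.
print(dependency_ratios(young=110, working_age=300, old=90))
# {'young-age': 36.7, 'old-age': 30.0, 'total': 66.7}
```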

As you may have seen, old age is also, confusingly, defined inconsistently: in some Eurostat studies old age is assumed to start at 60, in others at 65.

I will try to find out the reason for such differences, which make it almost impossible to arrive at mathematically exact calculations. For the moment, we cannot exclude that such different bases of calculation are considered useful in a given report to be presented for political decisions: a “technical” document (produced by non-elected public officials) that aims to influence a political decision.

Whatever the reason, such differences may account only for a minor change in the statistical numbers, and do not affect the overall trends that I am trying to highlight here.

The dependency ratios quoted here are shown on page 12 of the “EU Youth Report 2012”, http://ec.europa.eu/youth/documents/national_youth_reports_2012/eu_youth_report_swd_situation_of_young_people.pdf

accompanying the Commission Communication COM(2012) 495 on the implementation of the renewed framework for European cooperation in the youth field (EU Youth Strategy 2010-2018) http://ec.europa.eu/youth/news/20120910_en.htm

It strikes me that such threatening numbers are used to design policies for young or old Europeans, when the true meaning of such numbers is that the system is imploding, as I will try to show in the following part of this post. If you want to deepen the analysis, here is some more data about demographic evolution in Europe:

http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KE-ET-10-001/EN/KE-ET-10-001-EN.PDF

http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-SF-08-072/EN/KS-SF-08-072-EN.PDF

Now, in the next 30 years the percentage of old-age population in relation to the working-age population in the EU27 will increase from about 26% to 45.42% (an increase of about 75%). That is, by 2045 the ratio of working-age people to old-age people in the EU27 will fall from roughly four to one to roughly two to one. A dramatic shift.

In the coming 50 years, the dependency ratio of the young will increase only slightly, according to the EU Youth Report 2012, where on page 12 you can see the development of old-age and young-age dependency over the coming 50 years. The young-age dependency ratio is today 23% (23 persons aged 0-19 for every 100 of working age) and it will increase to 25%. That means that there are about four persons of working age sustaining each young person below working age. ec.europa.eu/youth/documents/national_youth_reports_2012/eu_youth_report_swd_results_of_eu_youth_strategy_2010-2012.pdf

In 2045, about 30 years from now, we will have a total old-age + young-age dependency ratio of about 70%: 24% young-age dependency added to about 45% old-age dependency. In 2060 it will be a little less than 80%. It means that in 2045 in Europe little more than half of the overall population will work in order to maintain the rest of the population.

In 2060, there will be about 1.2 working-age persons for each dependent person.

So far so good; we all probably knew that the demographic pyramid in the EU is capsizing.

What is also important to know is that, if we compare such trends with Japan, the USA, Canada or Australia, there are significant differences, but the trend in these countries/economies is the same. http://www.economist.com/node/13611235

The total old-age + young-age dependency ratio in the US in 2050 may be somewhere in the mid 60% range, but still the trend is the same.

China is a different story, because of its one-child policy, which will lead to a quadrupling of the number of old-age dependent people in the next 40 years. An explosive situation, if not dealt with…

We learn from an important OECD paper that social expenditure was roughly 30% of GDP in OECD countries in 2009, with minor differences between Europe and the USA: http://dx.doi.org/10.1787/220615515052

http://www.oecd-ilibrary.org/social-issues-migration-health/how-expensive-is-the-welfare-state_220615515052.

Now, we also know from the OECD 2012 Tax Policy Analysis Revenue Statistics that in 2011 the average taxation (on personal income, company income, capital gains, social security and consumption) in OECD countries was about 33.8% of GDP, with the lowest taxation in Chile (19.6%) and the highest in Denmark (47.6%): http://www.oecd.org/ctp/taxpolicyanalysis/table_a_eng.xls

http://www.oecd.org/ctp/taxpolicyanalysis/revenuestatistics2012edition.htm

In 2011 EU27 taxes on production and imports accounted for 33.3 % and current taxes on income, wealth, etc. 31.2 % of total tax revenue. The share of current taxes on income, wealth, etc. has decreased from 2007 onwards with no recovery in 2010. The share of social contributions increased noticeably from 2008 to 2009 but decreased slightly from 2009 to 2010 to stand at 35.1 % of total tax revenue. http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Tax_revenue_statistics

Simplifying, the EU27 tax revenues can be divided as follows:
- about 1/3 from VAT and similar taxes, like import duties;
- about 1/3 from taxes on income (personal and corporate);
- about 1/3 from social security contributions.

All this considered, we understand that social expenditure is the vast majority of public expenditure in the OECD. In fact, the percentage of social expenditure and the percentage of taxation are almost equal, when put in relation to GDP.

In every country the relation between taxes and social security contributions is designed in its own peculiar way, so we cannot state that almost all taxes and social contributions end up in the welfare system.

But looking at individual OECD economies, like Germany, Italy, France and the USA, we can see that social expenditure is the absolute majority of public spending (i.e. more than 50%), topping in some cases (Germany, Italy, France) 60% of total public expenditure.

Germany had a 2011 GDP of about 3,479 billion US dollars and tax revenues of about 1,250 billion US dollars. Social expenditure in 2009 was about €745 billion (a little less than 1 trillion USD). http://www.tagesspiegel.de/politik/deutschland/sozialbericht-2009-sozialausgaben-steigen-um-33-milliarden-euro/1558648.html
Four fifths of German public expenditure is social!

In Italy the social state is less generous: only two thirds of public expenditure are devoted to social expenditure. http://www.cliclavoro.gov.it/SondaggiStatistiche/Documents/NotaspesasocialeinItalia.pdf

What do we learn from these statistics?

Well, first, that there is no longer such a difference between Europe and the USA when it comes to social expenditure.

Secondly, we learn that it will be impossible to cover, by increasing total taxation, the dramatic increase of the dependency ratio over the next 50 years. Either productivity will double (and with it income and GDP), or there will be the need to increase taxation (for the part not covered by the productivity increase).

The OECD 2012 Tax Policy Analysis Revenue Statistics also shows clearly that over the last 50 years taxation levels have been inversely proportional to GDP growth: in 1965 average OECD taxation was 25.5% of GDP and the growth rate was solidly above 4%. Now average taxation has increased by almost half, and growth in the OECD is sluggish or non-existent, also because of the accumulated debt in most OECD countries. For more detailed considerations on the optimal range of taxation relative to GDP, read the OECD paper of November 2011 on the subject, “What is a “Competitive” Tax System?”: http://www.oecd-ilibrary.org/taxation/what-is-a-competitive-tax-system_5kg3h0vmd4kj-en.

It seems very unlikely that the demographic pyramid, standing on its cusp, will be sustainable at the current pace of social expenditure. The most likely outcome is that the amount of social expenditure will have to be cut drastically, with the social consequences that we see in some EU member states today.

Looking at people rioting in Greece, the UK, Spain and Italy, I ask myself: “do the protesters really believe that a social security system can work on borrowed (or non-existent) money?”. I don’t think they hold such foolish beliefs.
My impression is that they are furious because nobody told them before that the promises made in the past decades were hollow at best, and false in every other case.
But looking at the demographic pyramid and the taxation level together, it becomes obvious that even the “virtuous” states will have to halve the absolute amount of social expenditure in the next 50 years, if not reduce it by two thirds in the worst-case scenario. There will be an unsustainable disproportion between what a single OECD citizen will have been asked to pay to the state and the benefits that he will receive in exchange, some day.
This system has to be changed, because it will certainly produce anger and unrest in the coming decades. And it could de-legitimize democracy. During the last huge systemic crisis (1929-1932), too, democracies were de-legitimized, because they unilaterally changed the underlying social pact, in a way that led the majority to disavow it.

But it will not be easy to change current social expenditure trends: the amount of social expenditure is seen as a “matter of fact” issue that cannot be politicized: if someone is sick, he has to be cured; if someone is jobless or socially weak/excluded, he has to be supported. So a myriad of public and private agencies increase every day the number of social benefit recipients, making “technical” assessments that are (until now) beyond the reach of (informed) political decisions.
In Italy in 2008 there was a 58% increase in “pensioni di accompagnamento” (attendance allowances), according to the central accounting agency (Ragioneria Generale dello Stato). It is not by chance that such an increase coincides with the deepest financial crisis of the last eighty years… http://www.cliclavoro.gov.it/SondaggiStatistiche/Documents/NotaspesasocialeinItalia.pdf
The same thing has happened in the US with social expenditure for disability: http://www.economist.com/node/21564418
On the one hand, during a crisis social expenditure rises. On the other hand, when governments try to fix their budgets, they rarely cut expenditure on the socially weak. In 2012 in the UK, the effort to reduce the public deficit materialized in higher taxes and lower capital expenditure: http://www.economist.com/blogs/buttonwood/2012/10/fiscal-policy
The consequence of all this is a ballooning social public expenditure that makes none of its recipients wealthier (on the contrary, recipients are increasingly socially segregated), but constantly increases the state’s social expenses.
Whereas one generation ago those who benefited from social transfers had the hope of being readmitted into the productive cycle of society, and in effect used social benefits for a limited time, today they increasingly remain dependent on them.

On this path, Europe (and most OECD economies), before the baby-boomer generation dies out, will be “No Country for Old Men” (and women, of course)…

And if people become disillusioned, angry, suspicious of anything that smacks of morality or social inclusion, well… then nobody will be in charge anymore. Again, we only have to look at Greece today and Argentina (ten years ago) to see that, when public debt becomes unbearable, nobody is able to take political decisions anymore.
Already at this very moment, the vast majority in the “virtuous OECD countries” have the unsettling feeling that the reciprocity of the social pact is biased and slowly unravelling, to their detriment. The system is opaque, so that too many don’t know for sure whether they are “givers” or “takers”.
This has to be stopped, before it is too late and decisions are taken by the mob or by “technical” bodies, with a (not so) implicit suspension of democracy and political accountability.


I’m back!

Dear Friends,

I apologise for having been absent for a few months.

Several things have happened since the last post on my blog:
- I was submerged by spam (but also received much positive feedback);
- I moved from blogspot.com to my own www.digitalagreement.eu domain;
- I have seen the light at the end of the tunnel, and am beginning to be (very cautiously) optimistic about where we are heading (even if I still believe that there will be a drastic change of economic paradigms, which will cause huge stress and suffering in the more indebted and richer OECD member states).

I promise that I will post regularly now, at least every Monday before 9.00 am CET (Central European Time).
The next post,
“Where are we heading to …? (Part 2):
No Country for Old Men (…is anybody in charge here…?)”,
will be dedicated to the hidden debt generated by social security expenditure in the OECD and its impact on stability and democracy.

 


Verba manent, scripta Volant #2: What are (digital) documents, in the end? Memories!


Ontological philosophers pose a nice paradox about the real relevance of written documents.

Let’s imagine that a woman convenes with her son at a lawyer’s office in order to execute a donation of a precious painting to him.

And let’s imagine that her brain is not able to retain any memory of the closing of that donation.

So she will go on assuming that the painting belongs to her; she will report the missing painting to the authorities, and the police will eventually find the painting in her son’s home.

If, at this point, the donor is confronted with the written document of the donation, without having any memory of signing it, she will conclude that such a document is a forgery. Even if she recollects that she actually wanted to donate the painting, she will still be shocked to see that her son assumes he is the rightful owner, if she has no memory of signing the donation.

If the lawyer and any witnesses testify that she actually signed it, she will become increasingly frustrated and desperate, but not convinced, as long as she cannot recollect the event of meeting at the lawyer’s office and signing the donation.

This will be the situation, if just one of the involved parties has no memory of the generation of the document.

Now, what would have happened if the human brain had been unable to retain any memory of signing a document (as in the case of dreams, of which we are unable to retain the memory)? What if nobody recollected anything of what happened in the lawyer’s office? Well, then documents would be utterly useless! They simply would not prove anything! http://www.amazon.com/Documentality-Necessary-Leave-Traces-Commonalities/dp/0823249697

So, if we look at (analogue and digital) documents, we have to be mindful of something that is not apparent but still essential in the definition of a document: that it is something created intentionally by a human being, and therefore something that ontologically leaves a specific recording in the brain of the persons involved in the generation of the document (or its exchange/execution). Without that trace of the generation of the document in the mind of the author (and/or of the relying parties), no document can exist (or, if it existed, as in the donation example, it would be a useless document, which is, almost, a contradiction in terms).

In fact, the (wrongly named) “document” generated without intention and without consciousness is a registration: think of a camera inadvertently switched on and left filming until the battery dies; in this case, maybe, it will never be discovered that there is a registration. In the documental pyramid, that registration is more than a trace, because it has been generated by a tool designed for that purpose. But it is less than an inscription, because it is not meant to be shared in any way: it is just a registration.

An example of an inscription is the registration made by the security camera filming the entrance door of a bank. Such a registration is meant to be accessed by the police, in case of a crime, or by the system administrator in order to store it until the day when it must be deleted, for data protection reasons. Here again, probably, nobody will ever look at the registration. Nonetheless, since it is designed to be accessed by more than one person, it is more than a registration: it is an inscription. Inscriptions are more likely to be transformed into documents through intentional use, subsequent to the generation of the inscriptions.

Before the invention of recording machines in the XIX century, almost all kinds of items bearing hand-made traces (paintings, drawings, texts, etc.) were inscriptions or documents. If they were not documents, it was mostly because there was a precise intention not to share the information recorded on the item (like personal notes and scribbles).

Paper-based documents have the ontological property that they can be physically and immediately perceived by the author (and the relying parties);

they have the ontological property that they are visible to humans without the use of any device;

moreover, paper-based documents (again, ontologically) can be handled/exchanged without the need of any tool (they are not liquid or gaseous, too hot or too cold, too big, too small, too heavy or too light to fit such a purpose);

paper-based documents exist in the same dimension (the physical, or analogue, dimension) in which the author and the relying parties themselves exist.

Because of all the above-mentioned properties, the generation and validation (signature) of a paper-based document is (almost) always a physical, immediate experience for the author (and for the other relying parties): the case of a human being involuntarily creating a handwritten paper-based document is purely theoretical and scholastic (the drunken writer/signer, the sleepwalker, etc.).

Things have become more complicated lately, due to technological evolution.

First came, in the XV century, the invention of the printing press for reproducing text and images. Since then, in order to reproduce information, we have been able to use machines that record/reproduce images and sounds on analogue supports. They were, in chronological order:

 

  • XIX c. celluloid for photographed and filmed images,
  • XIX c. shellac records for sound,
  • XX c. magnetic tape for images and sound,
  • XX c. cyclostyle and photocopier for reproducing text and images.

Still, the items created by such machines are just recordings (or inscriptions, if publicly displayed), as long as they are not intentionally apperceived/intended by the author and/or by the relying parties as documents. http://www.riccardogenghini.it/en/2012/12/10/traces-recordings-inscriptions-and-documents/

Before recording technologies were developed, there was no way of generating documents unintentionally!

It was also impossible to distinguish morphologically the original from the copy: both were handmade.

Now, the most common way of generating paper documents is to use a multifunctional recording machine (the computer). So, for handwritten documents, it has become possible to distinguish the original from the copy, because they are morphologically different. But it has become very difficult, if not impossible, to distinguish the original from the copy in all the other kinds of documents. Moreover, registrations, inscriptions and documents that (at the time of handwritten documents) used to be morphologically distinguishable have become identical. To distinguish between recordings, inscriptions and (true) documents, it is necessary to look into the semantics of the document, and eventually also into its technical structure.

Today’s photocopies are so perfect that it is almost impossible to distinguish the copy from the original.

 

But it is with “digital documents” that we come to a situation where almost all the properties of paper documents and other analogue documents are turned on their head:

a) We take for granted something that is essential but not obvious: that the generation of a document also produces some memory of the generation of such a document. This is not simply because the document is ontologically intentional, but also because it is physical and exists in a way that can be apperceived by humans without the need of any tool (magnifying glasses, microscopes, displays, grips, etc.).

Most “digital documents”, by contrast, can be created (and eventually even signed) without the “author” and the “signer” having any perception of the “digital document” being created (and eventually signed). Therefore, the “author” and even the relying parties may have no recollection of the generation/exchange/execution of the “digital document”.





 


Where are we heading to …? (Part 1: to be a saint or a fool?)

I was asked by my children whether states are borrowing so much more than we (as a family) do that they cannot get credit. Instinctively I would have answered “yes”. But before answering, I put down a few numbers, just to be sure.

Here are the numbers:
Western Europe has a GDP per head varying between 32,000 and 38,000 US$ at 2009 PPP (source: The World in Figures 2012, by The Economist).

The average price paid by a home buyer in Western Europe is between 150,000 and 180,000 US$ at 2009 PPP.
Consequently, home buyers get a mortgage whose principal varies from a minimum of about 450% to a maximum of about 560% of their yearly income.
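A quick check with the round figures quoted above (nothing more than a division) confirms that the mortgage principal is several times the yearly income:

```python
# Round figures from the text (US$ at 2009 PPP): GDP per head and average house price.
incomes = (32_000, 38_000)
house_prices = (150_000, 180_000)

low = min(house_prices) / max(incomes)   # cheapest house, highest income
high = max(house_prices) / min(incomes)  # dearest house, lowest income
print(f"mortgage principal between {low:.0%} and {high:.0%} of yearly income")
# -> a range of the same order as the 450%-560% quoted above
```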

So why can’t states borrow money on the markets, if they have a debt/GDP ratio varying between 70% and 200%?

The answer is easy, but not obvious.
Would a bank lend 150,000 US$ to a family father who plans to spend the money as follows (source: Eurostat)?
5% capital investment
10% for food and other items
13% for healthcare
15% for other services
23% for housekeeping personnel
39% for supporting old family members
5% interest expenditure

NEVER EVER !!!

In fact, an average family that asks for a mortgage spends (data aggregated from Eurostat, 2005):
10% on capital investment (house)
25% on food and other goods (furnishings, clothing, etc.)
11% on services (communication, utilities, recreation, culture, education, hotels and restaurants)
9% on transportation (car and public transport)
2% on healthcare
8% on interest
35% on taxes (OECD average 2009: but taxation can grow to more than 50% of GDP in some states, like France)

This means that 68% of the disposable income of a family goes to the purchase of goods. About 33% of a family’s purchases are of durable goods (house, car, furniture, white goods, TV, computers, etc.), compared to 5% of state expenditure.

In 1910 the state was spending 12.5% of GDP and devoting (like a normal family today) about 30% of its expenditure to the purchase of durable goods (infrastructure). Today the state spends an average of 45.6% of GDP and devotes just 11% to building up infrastructure (source: Public Spending in the 20th Century. A Global Perspective, by Vito Tanzi (IMF) & Ludger Schuknecht (ECB), Cambridge University Press 2000).

The state has the expenditure pattern of a tramp, consuming almost 90% of what it earns. Tramps do not have access to credit. Poor people sometimes do, if there is a concrete prospect of a significant increase in their earnings or if, as in microfinance, credit spurs investment and consequently earnings.

I would say that the difference between a tramp and a poor person is that the latter is poor but striving to improve his status, while the former is poor (mostly) because of his attitude.

I would say that it has become obvious in the (bond) markets that a paradigmatic shift has to happen: in the last 100 years the amount of money collected by the state, in absolute and in relative terms, has been increasing relentlessly. This is going to change, for three main reasons:

1) it is difficult to imagine that states may have a share of GDP bigger than 50%, so there is not much room to increase the state’s share of the pie. In fact, several economists recommend that taxation stay under 35%. So the state’s share of GDP will likely shrink, some day;

2) productivity growth in OECD countries is slowing down, and not only because of the economic cycle. So the growth in personal income and in general wealth is slowing down;

3) the dependency ratio in the OECD will change dramatically in the next 30 years: in 2035 we will have 117.6 dependent persons for every 100 working persons. Today it is 79.6 per 100. That means an increase of about 50% in the coming 30 years (source: OECD population statistics 2006). If nothing changes, in 2030 no less than 60% of state expenditure will be on welfare. To cover that, the average taxation/contribution will have to exceed 60% by no less than 10%.

So the answer to my children was:

No, the state is borrowing much less than a family with a mortgage. But it is plainly evident that its spending pattern is unsustainable. Less than 10% of the money borrowed goes into increasing its infrastructure (assets); most of the rest is passed over to needy people. Who would help the needy with borrowed money?

A saint … or a fool.


Where do we come from … ?

It is understandable that we consider “normal” what has been the norm for the last 150 years.

But if we look at the same things from a perspective of millennia, then normality may look like an exception.

The world Gross Domestic Product (GDP) calculated by Angus Maddison in Geary-Khamis US$ at 1990 Purchasing Power Parity (PPP) was, in year 1 AD, about $100 billion. In 2000 it was more than $25,000 billion: an increase of 250 times. The increase from 100 billion to 600 billion took 1,820 years. The increase from 600 billion to 25,000 billion took 180 years. The speed of GDP increase in the last 180 years was several hundred times faster than in the previous 1,800 years (see image 1).
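The growth factors quoted here are easy to recompute from the rounded figures used in the text (treat them as orders of magnitude, not exact Maddison values):

```python
# Round world-GDP figures as used in the text (billions of 1990 Geary-Khamis dollars).
gdp_year_1, gdp_1820, gdp_2000 = 100, 600, 25_000  # approximate orders of magnitude

print("year 1 -> 2000:", gdp_2000 // gdp_year_1, "times")    # 250x in two millennia
print("year 1 -> 1820:", gdp_1820 // gdp_year_1, "times")    # 6x in 1,820 years
print("1820 -> 2000:", round(gdp_2000 / gdp_1820), "times")  # ~42x in only 180 years
```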


What has changed in the last 180 years?

One important factor of change was population growth: in the last 180 years it has become about ten times faster than in the previous 1,800 years.
From year 1 AD to 1820 AD, the population increased roughly sixfold, from 165 million to 1,000 million.
From 1820 AD to 2000 AD, the population increased sixfold, from 1 to 6 billion (see image 2).


What else changed in the last 180 years that could explain a more than 40-fold increase in world GDP?
The obvious answer is that the industrial revolution has been transforming the world since 1820.

But the industrial revolution has not spread evenly or synchronously over the globe. Some nations were more eager to embrace it; others could not or did not want to. The nations that did not embrace the industrial revolution generally were not prepared to embrace the social consequences of industrialization, or tried to domesticate its side effects: they erected moral, social and ethical hurdles to industrialization. Such nations never built up a strong industry and, eventually, remained poor.

To build a national industry, true industrialists are needed. Bureaucrats, shopkeepers or merchants will rather prefer to carry on with their usual way of doing business, which is, from their perspective, more effective and even more lucrative: low investment with the highest possible return, focusing mainly on the exploitation of regulation, of information asymmetries and of scarcity.

A radical change in the use of capital, with the deployment of capital-intensive production systems and mass labour, requires new social classes with a new mindset. This is what happened most strikingly in the USA during the second half of the XIX century: new rules, a new esthetic and a new ethic were born. The ethics and esthetics of the British gentleman (detached, indifferent, self-sufficient, impractical) were superseded by the ethics and esthetics of the engaged, professional, effective and essentially square businessman. A social class with a new mindset was overtaking the existing ruling classes (aristocracy, shopkeepers, artisans and merchants, which had been the backbone of society for seven millennia).

The British Empire (and part of its Commonwealth) started industrialization. But the traditional mindset of the British and a conservative social structure of aristocrats and professional military men were quite alien to unrestrained industrialization and eventually contained it. In the USA, industrialists were the first local aristocracy, the first ruling class: the United States is a living monument to the triumph of industrialization.

Why did Africa, Asia and Latin America lag behind? Colonialism certainly held back the industrialization of the colonies in the XIX century. But colonialism alone cannot explain why industrialization did not happen in four fifths of the globe during the XX century.
In fact, many efforts have been made in Asia, Africa, Eastern Europe and Latin America, mostly to no avail.

Industrialization was a success only in open societies. The more open (to foreigners and to social change) a nation was, the more it benefited from industrialization. It is no coincidence that the United States of America has benefited the most from industrialization. On the contrary, where industrialization was alien to social change and innovation (as in communist or populist or autocratic regimes), it wasted huge investments in infrastructure and industrial assets.

Germany, Japan and, to a lesser extent, France and Italy were the nations that most successfully followed the path of industrialization after the two forerunners (UK and USA). In all these nations the rising classes expressed a new mindset, new rules, new ethics and new esthetics: libertarianism, utilitarianism, impressionism, realism, existentialism, rationalism, irrationalism, socialism, cubism, nudism, expressionism, and so on.

Some explain the social openness of Europe in the last fifteen centuries with the co-existence of temporal power (the kings) and spiritual power (the Pope). But the Pope, in the end, was just another emperor at that time: he ruled a nation and claimed the moral right to decide who should rule as king in other nations (by crowning them). Certainly the dualism between religious power and temporal power in Europe is unique, compared to the other great societies (such as the Arabian, Indian and Chinese).

Personally, I believe that the importance of Roman law, and of its evolution through the class of merchants, should not be underestimated.

Roman law survived the collapse of the Roman Empire for one thousand five hundred years, until 1804, when Napoleon repealed it with his “Code Civil”. Transforming Roman law, the merchants created the Lex Mercatoria (the law merchant). For more than six centuries (from the XI century to 1804), the merchant tribunals were institutions parallel to the state tribunals. Nonetheless, their authority was rarely questioned by citizens or even by absolutist states. Only in 1804 did Napoleon transform the law merchant into a law of the French state and export it to all the European nations that he occupied militarily.

The law merchant was technocratic, evenly and strictly applied all over Europe: look at the court scene in the Merchant of Venice by Shakespeare, written around the year 1599 and describing the workings of the “Supremo Tribunale della Quarantia” in Venice in the XVI century. The law merchant protected and enhanced the accumulation of capital, limiting the individual rights of the entrepreneur: the law merchant required merchants to be solvent, better if rich. So it is no coincidence that many of them were richer than entire states.

It is common sense to say that the success of the western nations was the success of individualism. There is a great deal of approximation in this assumption. In fact, the law merchant was not individualistic at all. The law merchant protected the enterprise (mercatura) and its creditors, even at the expense of the merchant. All partners (even hidden partners) were fully responsible for all the obligations of the merchant with whom they were associated. A merchant who cooked his books was punished with long prison terms. If he also went bankrupt, he was put to death. All merchants involved in an enterprise had to be personally and fully liable for all the debts of the enterprise. No exceptions. Personal creditors of the merchant had no recourse to the assets of the enterprise, which were exclusively devoted to the relying parties of the enterprise.

These are not individualistic rules. Such rules protected the collectivity of the merchants and all relying parties. They protected the extension of trade, the expansion of credit, the increase of the assets of the enterprise. The property of the merchant was fully pledged to the workings of the merchant’s enterprise.

In comparison to the lex mercatoria, Arabic law was much more individualistic: it allowed the merchant to leave the enterprise at any moment and to protect his property from the risks and liabilities connected to it. These are the findings of Timur Kuran (http://econ.duke.edu/people/kuran) in his wonderful “The Long Divergence: How Islamic Law Held Back the Middle East” (http://press.princeton.edu/titles/9273.html).

The great leap forward of western Europe in the XIX century followed a long preparation, during which European society became more tolerant, open and individualistic, but at the same time more respectful of collective interests and rights, which were nurtured and represented by several legal entities: companies, foundations and other non-profit organizations and, last but not least, several different organs and legal entities of the states.

In a difficult time like the present, we cannot take for granted what happened in the last twenty centuries. We have to understand the reasons that made possible the road to riches of the western world. We have to ask ourselves how Europe, which was dirt poor after the abolition of slavery and the collapse of the slave-based economy, was able to recover and become more prosperous than all the other great civilizations of the world. Western Europe was dirt poor in the fifth century. At the turn of the first millennium AD, Europe had an income per head 10% lower than the average income per head of the rest of the world (Asia, Africa, Latin America; little is known of North America and Australia). Today Africa, Asia and Latin America have less than one fifth of the GDP per head of Europe or the USA (see image 3), even though in the last 180 years GDP per head has increased three times in Africa, eight times in Asia and more than ten times in Latin America and Eastern Europe.

After the long “Dark Age” between the end of the Roman Empire and the year 1000, the West (at that time, “the West” was western Europe) kept growing, inventing, innovating. But only with industrialization did income per head increase exponentially.

The same is true for the rest of the world. In Latin America and Eastern Europe, GDP per head grew significantly, with the same steep upward curve as the West, for the first time after 1870, with industrialization.
The same is true for Asia since 1950. In Africa, until 2000, there was only a more modest increase in personal income.

It is obvious that industrialization solved a structural problem that must be solved in order to become affluent. And the problem that industrialization solves is the curse of the low productivity of labor. How much crop can a peasant produce, scratching the earth with his bare hands? Can he feed a family of ten? The obvious reply is that he cannot.

Slavery was the social engineering solution to that problem: by the 7th millennium BC we find slavery in all human societies of the world (Mesopotamia, Central America and Asia), civilizations that had no knowledge of each other’s existence.

The most effective way to feed everybody was for the owner of the land, and of the expensive tools necessary for cultivation, also to own the people working on it. This was a radical change from the structure of the hunter-gatherers, who had roamed the earth as free and savage people for the previous 50,000 years.

Even western Europe was able to restart its economy only by reintroducing slavery (in the colonies), although it was a practice banned by the Christian religion.

Industrialization has proved itself a much better way to increase wealth than the slave economy or mercantilism, because it treated the cause of poverty (scarce productivity) and not the symptom (low income and thus low accumulation rates). All the rules that had been introduced to regulate trade and consumption from the XI century onwards had their moral and practical justification in scarcity.

Industrialization made abundance possible, and therefore needed global markets, global rules and fewer barriers.

Few of those who today advocate trade barriers are aware that China and India became poor because they closed themselves to foreign trade. Britain and the Netherlands surpassed Spain and France because they favored trade and industry, not mercantilism.

I have tried to summarize my last ten years of legal and economic studies in less than 2,000 words. So I have surely omitted some other important phenomena that shaped history and the economy in the last two millennia. I will be grateful to anyone who points out the relevance of events I have not mentioned, so that, together, we can draw a sharper picture of where we are coming from.

Knowing where we are coming from is of great importance today, because it shows an evolution. It shows us the (quite linear) path that the West has been following in becoming what it is. It also shows us the direction of our headway: a social direction, a cultural direction, an economic direction and a political direction.

In one sentence, the evolution of the West is characterized by the ability to evolve socially, to be culturally open and technically innovative, and to breed the greatest social change ever (after the end of the tribal structure of society): the rise of industry and of a class of industrialists, which solved the problem of scarcity and of the low productivity of labor.

But something has changed, apparently, in the last 10 years: the USA is heading towards a technical default on its public debt if no political compromise on the federal budget can be found before the end of 2012.
Spain asked yesterday for a massive bailout of its banks.
Greece is, technically, in default.
Ireland and Portugal have been bailed out.
Italy and France look quite shaky because of high public debt.
Growth is slowing everywhere, even in the developing countries.

Has the West become poor again in 10 years?

Where are we heading?


What will come next?

At Christmas 1999, reading the millennium edition of the Economist, I had an epiphany: in the article “Road to Riches”, some data from Angus Maddison were summarised in a graph showing the increase in the productivity of the West.

When I saw the graph, I became aware that the curve had to flatten. And so I prepared myself for that eventuality.
Since 2006, the investments have started to pay off: while others struggled, my business thrived and eventually tripled, in a market that was shrinking if not collapsing, as in real estate (which is more than half of my business).
Fundamentally, the Economist article had made obvious to me what is still unacceptable to the large majority of people: that the past century was exceptional, perhaps unique, in its multiplication of wealth and knowledge. It is no coincidence, then, that last week I read an article by Bagehot (yes, again in the Economist) that summarised my thoughts: The nightmare scenario.
I am not sure I have a solution. Possibly there is no solution, other than to reconsider reality starting from new theories (Einstein).
I am not even sure I have entirely understood the situation of the western world, an obsolete label, because there are many eastern and southern nations that have become rich, democratic and open. A better denomination is “OECD countries” or simply “the OECD”, referring to the 34 nations that are currently members of the Organisation for Economic Co-operation and Development.
But I think there are some people who have started to try to understand the facts underlying our current predicament. Too many of them are shying away from speaking out.
And there are too many who are shouting and have nothing to say, apart from blaming others for the unsolved problems (the government, the opposition, the bankers, the unions ... the usual suspects!).
This new blog of mine has the ambition to ask tough questions and to collect the opinions of those who care to discover the path to a sustainable future for us (the OECD people).
I believe we will need some substantial changes in our rules and organisations.
As a lawyer, my ambition is to understand which rules will provide justice and peace for the next century. Good rules last a long time: the “Ius Commune” of the ancient Romans lasted for about 1,500 years after the end of the Roman Empire.
I believe many of our current laws will not last a decade.
What will come next?

Verba manent, scripta Volant #1: Nuclear semiotics: the preservation of information for ever.

The handwritten signature and the tailcoat. (Posted 2012-12-10)

There are just three ontological (i.e. necessary, unavoidable) changes affecting how we interact with data/information generated/managed by computers, in comparison to how we interact with analogue (i.e. paper based) registrations/inscriptions/documents.

The first two differences are immanent, even with stand-alone computers.

The third difference is a consequence of computer networking on closed Local Area Networks (LAN).

A fourth difference is often mentioned, but on closer inspection, it is not an inherent difference between analogue and digital data/information; it is the consequence of (bad) design of IT systems.

The function of documents and of signatures has remained unchanged, if compared to their inherent nature (ontology) in the last 2000 years, even after digitalisation.

 

Digitalisation of data/information has changed the following two fundamental aspects of how information is generated, accessed and processed:

Change #1 (ontological): digital data, information and documents are not perceivable/understandable by humans, without the use of some tools (displays, speakers or printers). The relation between the human person and the information/document has become mediated.

Change #2 (ontological): the creation/elaboration of data/information has become much more complex and sophisticated. Data crunching and word processors made it possible for a single user (since the first day of personal computing) to create documents of such a great complexity that before digitalisation entire teams of dedicated people were needed in order to create them.

Computers interconnected on a Local Area Network (LAN) have introduced a third ontological change in the way documentation is generated and shared:

Change #3 (ontological): already with LANs (Local Area Networks), the physical location of the machine keeping the data/information became irrelevant, as long as the IT system keeping such data/information would allow (only) the person entitled to do so to access them [1].

These are the true ontological changes that have affected digital documents (and digital registrations and digital inscriptions).

There is a fourth change, according to many:

“Change #4”. Many (if not all) users of an IT system have unrestricted access to the same digital data/information, without any possibility for the other users to be aware of it. A paper document can be accessed only by those who can access the room/drawer/file where the paper is archived.

IT systems managing digital data/information could easily have been designed from scratch in a way that guarantees that

a) access is restricted
b) access (when permitted) is traceable
c) change is restricted
d) change (when permitted) is traceable
exactly as it was (and is) customary with paper based documents. Proper design of the IT system would have made digital data/information more secure and even more difficult to modify than paper based documents, with a complete audit trail of the changes. So, in my opinion, there is no “Change #4”; there is “Bad Design #1”: the digital home of digital data had no doors and no locks, and everyone had access to the system and was entitled to change its data/information (i.e. recordings, inscriptions and documents).
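
To make the point concrete, here is a minimal, purely illustrative sketch in Python of a document store that enforces the four guarantees listed above (all class and method names are my own invention, not taken from any real product): it refuses unauthorised reads and writes, and it appends every permitted access or change to an audit trail.

    # Illustrative sketch only: a document store enforcing restricted and
    # traceable access/change. Names are hypothetical, not from any real system.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Dict, List, Set

    @dataclass
    class AuditEntry:
        timestamp: str
        user: str
        action: str   # "read" or "write"
        doc_id: str

    @dataclass
    class DocumentStore:
        documents: Dict[str, str] = field(default_factory=dict)
        readers: Dict[str, Set[str]] = field(default_factory=dict)   # doc_id -> users allowed to read
        writers: Dict[str, Set[str]] = field(default_factory=dict)   # doc_id -> users allowed to change
        audit_trail: List[AuditEntry] = field(default_factory=list)  # append-only log

        def _log(self, user: str, action: str, doc_id: str) -> None:
            now = datetime.now(timezone.utc).isoformat()
            self.audit_trail.append(AuditEntry(now, user, action, doc_id))

        def read(self, user: str, doc_id: str) -> str:
            if user not in self.readers.get(doc_id, set()):
                raise PermissionError(f"{user} may not read {doc_id}")    # (a) access is restricted
            self._log(user, "read", doc_id)                               # (b) access is traceable
            return self.documents[doc_id]

        def write(self, user: str, doc_id: str, content: str) -> None:
            if user not in self.writers.get(doc_id, set()):
                raise PermissionError(f"{user} may not change {doc_id}")  # (c) change is restricted
            self._log(user, "write", doc_id)                              # (d) change is traceable
            self.documents[doc_id] = content

Real systems implement the same idea with operating-system permissions, database roles and tamper-evident logs; the point is simply that these guarantees are a matter of design, not of the nature of digital data.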

The true inherent changes are all related to Change #1, Change #2 and Change #3, and combinations thereof.

The modifiability of digital data/information is not inherent to IT systems, it is a consequence of given (bad) design.

Furthermore, IT experts subscribe to a piece of common sense about the modifiability of data/information that is plainly wrong, according to semiotics.

The lifecycle of data/information is essential in order to define its semiotic characteristics:

If there is the need to make information available for 10,000 years or more, an inscription on some medium will not work:
a1. because the sign may fade to the point of being unrecognizable;
a2. because the language will have changed so much that nobody will be able to understand it anymore (languages change to the point of becoming impossible to understand over a time span of between 200 and 800 years);
a3. because the medium carrying the sign may have decayed.
So, according to the Human Interference Task Force, only a properly designed activity, carried on for 10,000 years, can convey information across such a span, not just a sign on a medium (http://en.wikipedia.org/wiki/Nuclear_semiotics).
One of the solutions proposed in order to make information available for over 10,000 years is the creation of an “Atomic Priesthood” (http://www.semiotik.tu-berlin.de/menue/zeitschrift_fuer_semiotik/zs-hefte/bd_6_hft_3/).
In the end, only oral tradition can bridge information beyond a millennium. This is why it is important for each religion to have rites through which to pass on its timeless truth (for Christianity, it is the Sunday mass). For information that has to last for millennia, the sign and the medium on which it is engraved must necessarily change and evolve along with human society!

If we need to provide information for a time span that is within the combined duration of the language and of the medium used, then a static “document” (i.e. registration, inscription or document, in their ontological meaning) can fulfil the task (see my previous post http://www.digitalagreement.eu/2012/12/10/traces-recordings-inscriptions-and-documents/). In this case, the meaning of “static” and “modifiable” is relative to the purpose of the “document” (i.e. document, inscription or recording) [2]. Legislation may define the minimum formal requirements of legal documents, depending on their use and relevance: there are requirements on how to write down legislation (parchment is required by some constitutions), on the formal requirements of a contract in written form (all civil law systems provide such a definition) or of a handwritten will (most civil law systems provide a definition of the holograph testament); moreover, there are formal requirements for banknotes and formal identification documents, passports, or other documents representing equity or obligations of public/private companies, and so on. Proof by witnesses, presumptions or formal documentation is needed in order to provide full proof of a legal act or of a legal fact, depending on the legal system. The strongest requirements concern a limited set of contracts:

b1. for some of them, it is necessary to have not only a document, but also witnesses attesting the origin and the authenticity of a document, with an interesting mix of written and oral documentation: e.g. marriage;
b2. for some others, it is necessary that they are filed and archived by the competent authority (registrar of companies, or secretary of state): e.g. limited liability companies; real estate property.

Nevertheless, no legislation makes the validity of documents depend on their inherent modifiability. What counts is that, in the few cases where a specific form of contract is required by law, the form of the document complies with that law. So it shall be for digital documents.

 

What has not changed with digitalisation?

The function of a document has not changed. It is commonplace to state that digital documents are not inherently static, unlike paper based documents, implying that they may not be real documents.

Now, the truth is that all documents become unreadable after more than 800 years, either because the traces on the support fade away or because the support decays. So what is “static”? A written leaf of paper left in the rain becomes unreadable after a few minutes. It is (and was) possible to forge even passports and banknotes: imagine, then, how easy it may be to forge a contract written in ink on a normal DIN A4 page!

And if you keep an archive of chronologically organised papers in a building whose walls are made of mud, with no doors, no locks, no heating, no ventilation and no maintenance, such documents too will mold, stick together and become useless after just a few years (or just one bad winter, or one rainy season).

So, the concept of a “static document” is very relative and poorly defined: a small-talk commonplace.

A “static” piece of paper implies quite a lot of organisation and technology, which we simply take for granted. But if we lived in an igloo in Iceland or in a hut in the Amazon, such organisation and technology would be very hard to come by.

Digital files, too, could be protected by default and by design:

by the hardware used to store and manage such files, and/or

by the operating system, and/or

by access management software, and/or

by a combination of the above-mentioned technologies.
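
As a small, hedged illustration of the second and last points (operating-system protection combined with an integrity check), the following Python sketch uses only standard-library calls; the file name is hypothetical. It makes a stored file read-only and records a SHA-256 digest, so that any later modification becomes detectable.

    # Illustrative sketch: OS-level protection plus a recorded digest, so that
    # any later modification of the file can be detected.
    import hashlib
    import os
    import stat

    def protect_and_fingerprint(path: str) -> str:
        """Make the file read-only for everyone and return its SHA-256 digest."""
        os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def is_unmodified(path: str, recorded_digest: str) -> bool:
        """Recompute the digest and compare it with the one recorded at archiving time."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == recorded_digest

    # Hypothetical usage:
    # digest = protect_and_fingerprint("contract_2012.pdf")
    # ... years later ...
    # assert is_unmodified("contract_2012.pdf", digest)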

So, the only novelty about digital documents is:

that humans are able to perceive them only through some tools (computers) that elaborate them logically (and not analogically, as with magnifying glasses, microscopes or symmetric code-decryption), and

that they are far more complex than written paper.

Before ubiquitous computing, even digital documents used to live within the borders of a given single piece of hardware or of a single LAN, so that their origin and their context were defined. This is not true anymore, now that a huge amount of data/information/documents are exchanged every day over open networks.

So, it may be that digital data/information/documents (i.e. recordings, inscriptions and documents) get modified involuntarily or without the author(s) being aware of it. But this was also true of all archived documents, particularly when the archivist was a different person from the author.

This hypothetical difference between analogue and digital data/information/documents (i.e. digital recordings, inscriptions and documents), in the end, comes down to how a specific digital or analogue document has been archived and preserved, and is not an ontological difference.

The current common sense according to which digital documents are inherently dynamic and easily forgeable, if compared to paper based documents, is not an absolute truth. It takes for granted many aspects of the current design of information technology that may change radically, even in the near future.

The function of the signature has not changed. It is true that handwritten signatures cannot be affixed to digital data/information/documents (i.e. registrations/inscriptions/documents) in order to validate them. But this is hardly new! Humanity is back to square one: digital documents may be marked with some elaborated sign that has no biometric link to the author of the document (the electronic signature), exactly as was customary from the second century BC to the middle of the nineteenth century AD: a time when “scribes” drafted and signed documents on behalf of the author(s). That was the time before graphology was invented as a (disputed) pseudoscience by Abbé Michon, who became interested in handwriting analysis in 1830. He published his findings shortly after founding the Société Graphologique in 1871.

For about two millennia, signatures were routinely affixed (in a mediated way) by slaves or employees of the signer. No more and no less than now, when we type our name and surname at the end of an email. As with handwritten signatures appended by scribes on behalf of the signer, the validation of digital registrations/inscriptions/documents has become a highly complex process, in which social engineering and technology ensure the authenticity and the origin authentication of the digital data.

The social engineering part in Rome was the Forum, where it was compulsory to close agreements and where it was compulsory to open the seals and to reseal the contracts in order to preserve their validity and enforceability. In the Middle Ages, the Forum was replaced by the notary’s office. Today, social engineering is provided by contract law, legal compliance and accountancy rules.

In ancient Rome, the technological part was represented by the sealed wooden tablets that were kept by the parties to the agreement (very similar to a digital document with an electronic signature!). In the Middle Ages, the technological part was the professional organisation of archives (with encryption/decryption facilities) by the notaries and the clerks of the government. Today, the technologies are ERPs, document management systems and PKIs.
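
The purely technological core of such validation can be sketched in a few lines. This is a toy example using the third-party Python “cryptography” package (real PKIs add certificates, revocation and time-stamping on top of this primitive, and the document text below is of course invented): a private key produces the “elaborated sign”, and any relying party holding the public key can verify both the origin and the integrity of the bytes.

    # Toy example of an electronic signature: no biometric link to the author,
    # only a mathematical link between a key pair and the signed bytes.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    document = b"Sale contract: Titius sells the fundus Cornelianus to Caius."

    private_key = ed25519.Ed25519PrivateKey.generate()   # kept by the signer
    public_key = private_key.public_key()                # handed to the relying parties

    signature = private_key.sign(document)               # the "elaborated sign"

    try:
        public_key.verify(signature, document)           # origin and integrity confirmed
        print("signature valid")
    except InvalidSignature:
        print("document altered or not signed by this key")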

From a global and historical perspective, reliance on the ability to verify a handwritten signature through a graphologist existed for less than 150 years out of two millennia. This is the reason why the presence of witnesses has remained customary in private negotiations and is still mandatory (in some legal systems) for the most relevant notarial contracts.

The biometric handwritten signature (on paper) has had the same lifespan as the tailcoat with the white bow tie!

 

 
