Locked in Tour Europe


Author: John Day (Follow on Twitter: @JeanJour)

How many people can you get in a Citroën Deux-Chevaux?

I’ll tell you: Seven.

Empirical evidence for this comes from the first OSI (Open Systems Interconnection) Working Group meetings in October 1978, held in the AFNOR Tour Europe building in La Défense, Paris.

We had been meeting all week, and it had come time to produce the first real draft of the OSI Reference Model, TC97/SC16/N117: the document with the first drawing of the hour-glass model, only it was a martini glass. This was my first standards meeting. It was going to be quite an education, but I got a huge surprise that first morning, when I walked in to find an old friend, Kenji Naemura, one of the Japanese delegates, sitting across the table from me.  Kenji had gotten his PhD working on Illiac IV!

John Aschenbrenner’s first draft
of the Hour-Glass Model as it
appeared in N117.

I was staying out in Versailles with one of the CYCLADES guys, Michel Gien. We were putting in long hours all week, meeting well past 6 with lots of homework in the evenings. That Thursday, most of us were working on producing that final draft. Although the meetings were in the basement in Oceanie (over the years I grew to have a real love-hate relation with that room!), Michel and I were upstairs in the AFNOR offices editing sections of the documents on computers at IRIA, to be reproduced for the Plenary in the morning. This was quite a problem for me, since I spoke little French. I was using an editor that I knew had all the usual commands, just not in English! And the keyboard wasn’t quite the same either. At some time after midnight, Michel and I realized that nothing more we edited could be reproduced and collated in time for the meeting in the morning. So we bagged it and went home to get some sleep.

A devoted cadre of 7 people stayed, including Charlie Bachman (US, it was his 7 layers), Hubert Zimmermann (France, WG1 Chair), Kenji Naemura (Japan), Don Shepherd (Canada), Tilly Bayard (AFNOR WG1 Secretariat), John Aschenbrenner and Jerry Foley (both US). Poor Tilly had just been hired two weeks before. This was her introduction to standards.

In those days, copiers just copied. They would run off the needed number of copies of each page, lay them out on a table, and then each person would go around the table picking up pages to assemble a complete copy. They finished at 4:30 AM, only to find themselves locked in Tour Europe. Zim found a way out by climbing through a transom and opening the door from the other side.

They then piled everyone into Zim’s green Deux-Chevaux. How they did it I will never know. Charlie and Don were not small guys! I heard Kenji was sitting on Charlie’s lap. Zim dropped everyone at their hotel and headed to his place for very little sleep.  We re-convened at 8:30 that morning, and we did not look good!  But the document was ready for review.

For the next meeting in June 1979 in London, I re-organized the document to more or less the structure it ended up with and file transferred it to UCL over the ARPANET so we would have it for the meeting. So far so good!

The Necessity of Theory in Science, or Big Data is Anti-Science (2)

Author: John Day (Follow on Twitter: @JeanJour)

Part 2 of 2. Read part 1 here.

I had been asked to write a review of a book for Imago Mundi, the premier history of cartography journal. Over the 2014 holiday break, I decided to knock it out.  The book was on Jesuit Mapmaking in Early 18th century China. (I have published a bit on this period.)
The book is primarily about the first major scientific mapping effort anywhere, instigated by the Kangxi Emperor, and the resulting Atlas. But the book also discussed one of two well-known incidents in the late 17th century where the Jesuits had been pitted against the Court astronomers to see which could most accurately predict three astronomical events: a lunar eclipse, the length of a shadow cast by a gnomon at a given time of day, and the relative and absolute positions of the stars and planets on a given day.
The Jesuits produced more accurate results than the Chinese Court Astronomers, resulting in their being put in charge of the Court observatory in Beijing.

Why were the Jesuits’ calculations more precise? It certainly wasn’t because the Chinese couldn’t do the math to the proper precision. After all, the Chinese had been using the decimal system for centuries. (When discussing surds, Needham notes that the Chinese had adopted the decimal system so early it wasn’t clear they noticed that there were irrational numbers.)

Then why?

Because the Jesuits were using techniques developed with and backed by theory.  They didn’t develop the techniques or the theory. Others in Europe had done that. But the “theory” behind it had forced the Europeans to be more precise to back up what they knew, to look more critically at their work, to think more deeply about it, improve their arguments. Hence creating more precise techniques.

The Chinese, on the other hand, had a procedure to follow. They didn’t understand why it was correct other than it had always worked “well enough,” so why look further? (Hmmm, where have I heard that before!) They had been trained that it was the way to do it.  They just knew it worked. And, the procedure didn’t really indicate directions that would lead to how to improve it. (Needless to say, respect for authority and ancestor worship didn’t help in this regard.)

We are seeing the same thing on the systems side of computer science today, and especially in networking, where it has been a badge of pride for 30 years that they do not do theory. In 2001, the US National Research Council led a study of stagnation in networking research; one quote from their report sums up the problem:

“A reviewer of an early draft of this report observed that this proposed framework – measure, develop theory, prototype new ideas – looks a lot like Research 101. . . . From the perspective of the outsiders, the insiders had not shown that they had managed to exercise the usual elements of a successful research program, so a back-to-basics message was fitting.” [1]

It must have been pretty sobering for researchers to be told they don’t know how to do research. Similarly, the recent attempt to find a new Internet architecture has come up dry after 15 years of work. The effort started with grand promises of bold new ideas, new concepts, fresh thinking, clean slates, etc., and has deteriorated through ‘we should look outside networking for ideas’ (a sure sign they had no ideas when, in fact, the answers were inside, as they always are); to ‘the Internet is best when it evolves’ (they had given up on new ideas); to ‘we should build on our success’ (it is hard to get out of that box).
When I asked my advanced networking class to read recent papers on the 6 efforts funded by NSF on the Future Internet, their first question after reading some of the papers was, “These were written by students, right?” Embarrassingly, I had to reply that they had been written by the most senior and well-respected professors in the field.

This is a classic case of confusing economic success with scientific success. They were focused on what to build, not asking the much harder and more dangerous question: what didn’t they understand? They didn’t question their basic assumptions, even though fundamental flaws were introduced as early as 1980, made irreversible by 1986, and compounded in the early 90s.

On the other hand, our efforts, which have questioned fundamentals and forced us (me) to change long-held views, have yielded one new and often surprising result after another: that a global address space is unnecessary; reducing router table size by 70% or more; recognizing that a layer is a securable container, greatly simplifying and improving security; that decoupling port allocation from synchronization yields a protocol that is not only more robust but more secure; etc.

Of course, they have also shown that connectionless was maximal shared state, not minimal; that of the four protocols we could have chosen in the 1970s, TCP/IP was the worst; that of the two things IP does (addressing and fragmentation), both are wrong; that the 7-layer model was really only 3 (well, by 1983 we knew it was only 5); and that much of what has been built over the past 30 years is questionable. At 9 major decision points in the Internet, they have consistently chosen the wrong option, even though the right one was well known at the time.

There are many examples from networking where not doing theory has meant missing key insights. A few should suffice:

  • It is generally believed, and taught in all the textbooks, that establishing a connection requires a 3-way handshake of messages. However, this is not the case. In 1978, Richard Watson proved that the necessary and sufficient condition for synchronization for reliable data transfer is to bound three timers: maximum packet lifetime, maximum time to send an ack, and maximum time to exhaust retries. The three-message exchange is irrelevant: yes, three messages are exchanged, but they have nothing to do with why synchronization is achieved. Watson then demonstrated the theorem in the elegant delta-t protocol. By not doing theory, they missed the deeper reason that it worked, and missed that the resulting protocol is both more robust and more secure.
  • Many people will tell you that network addresses name the host, and that naming the host or device is important (several of the projects noted above among them). As it turns out, naming the host may be useful for network management, but not for communications; in fact, it is irrelevant for communications. If you construct an abstract model and look carefully at what has to happen, you see that what the address names is the “process,” the locus of processing, that strips off the header of the packet carrying the address. The host is merely a container. Well, you might say, there are places where there is only one “process” stripping off that header, so it and the host are synonymous. Yes, that case exists, and in large numbers. But it is not required to exist in all cases, and doesn’t in some very significant ones. By not doing the theory, they missed this insight, which made dealing with VMs very messy.
  • In 1972, we first realized that in peer networks the “terminals,” now computers, could have multiple connections to the network. In all previous networks, the “terminals” were very simple, and having only one connection was all that was possible. The advantage of a computer having more than one link to the network is obvious: if one fails, it still has connectivity. However, addresses in the ARPANET, like those in all previous networks, named the wire to the “terminal,” i.e. the interface. If one interface went down, the network had no way to know that the other address went to the same place; to the network, it appeared to be two different hosts. One could send on both interfaces, but not receive on both (not without re-establishing a connection to use the other address). Addressing the interface made addresses route-dependent. Addresses had to be location-dependent but route-independent. The solution is apparent if there is a theory, which we had in operating systems: application names designate what program and are location-independent; logical addresses provide location-dependence but route-independence (independent of where in physical memory); and physical addresses are route-dependent (dependent on the path used to access the memory). Naming the node, not the interface, solved this problem. Not only did it not cost anything, it is significantly less expensive, because it requires between 60% and 90% fewer addresses, and router table size is commensurately smaller. All the other network architectures developed in the 1970s/80s got this right; only the Internet, which doesn’t do theory, got it wrong.
  • But we still thought that addresses could be constructed by concatenating an (N-1)-address with an (N)-identifier. This seemed natural enough; after all, files were named by concatenating directory names down through the tree, with the file name as the leaf. That was until 1982, when we started to look at the detailed theoretical model of what would happen if we did that. It quickly became apparent that such an address defined a path up through the stack: concatenating the addresses made them route-dependent, precisely what we were trying to avoid. The address spaces at each layer have to be independent. Of course, it was obvious once you remembered what Multics called a filename: a pathname! But it was doing the theory that led to recognizing the problem. So why does IPv6 embed MAC addresses in the IPv6 address? Because they don’t do theory.
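Watson’s timer condition from the first bullet can be sketched in a few lines. This is an illustrative sketch only: the class, parameter names, and numbers below are my assumptions for the example, not delta-t’s actual parameters or intervals (those are in Watson’s paper). The point is that safely discarding connection state depends only on bounding the three timers, not on any particular message exchange.

```python
# Illustrative sketch of Watson's result: synchronization for reliable
# transfer depends on bounding three timers, not on a 3-way handshake.
# All names and numbers are assumptions for this example.
from dataclasses import dataclass

@dataclass
class TimerBounds:
    mpl: float  # maximum packet lifetime in the network (seconds)
    a: float    # maximum time a receiver may wait before sending an ack
    r: float    # maximum time a sender will keep retransmitting

    def state_retention(self) -> float:
        """A quiet interval after which no packet from the old
        conversation can still be in flight, so per-connection
        state can be discarded safely."""
        return self.mpl + self.a + self.r

b = TimerBounds(mpl=120.0, a=0.5, r=60.0)
print(b.state_retention())  # 180.5 seconds
```

Notice that no handshake appears anywhere in the condition: once the three bounds hold, the endpoints can synchronize and release state on timers alone.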

There are many more examples, all cases where not doing theory led to missing major insights and improvements. But notice that today we are doing the same thing the Court Astronomers were doing. Our textbooks recount how things work today, which students take as the best way to do things. We teach the tradition, not the science. We don’t even teach how to do the science. We don’t teach what needs to be named and why. Watson’s seminal result is not mentioned in any textbook. (One young professor asked me why he should teach delta-t if no one is using it. I almost asked him to turn in his PhD! We aren’t teaching the fundamental theory; we are teaching the tradition.)

For a talk on this problem a few years ago, I paraphrased a famous quote by Arthur C. Clarke to read: “Any sufficiently advanced craft is indistinguishable from science.” (Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.”) We are so dazzled by what we can do that we don’t realize we are doing craft, not science.

Big Data is the same thing, only worse. Big Data is accelerating the move to craft and is sufficiently sophisticated to appear to be science to the naive. Correlation is not causality. We create algorithms to yield results, but do we have proofs? Big Data supposedly tells us what to do without telling us why, and without contributing to a framework of theory that could lead to deeper, more accurate results and likely even deeper insights.

Even Wired Magazine called Big Data the end of science, although, as usual, they didn’t realize that what they were advocating was stagnation. Of course, every field goes through a period of collecting a lot of data before it becomes clear what is important and what the theory is. This has happened before. But what hasn’t happened before is to advocate that we don’t need theory. It is putting us in the same position as the Court Astronomers in 17th C China. And the rate of change and adoption is far faster now than then.

There are those who claim it is a *new* science, when it is actually the greatest threat to science since the Catholic Church found Galileo guilty of proving a heathen (Aristotle) wrong. (I never have understood that one!) The Scopes trial was more circus than threat, though that may have changed in the backward US.

We are taking on the same characteristics seen in Chinese science in the 17th C.  It isn’t pretty and it isn’t just networking.  Read the last 5 chapters of Lee Smolin’s The Trouble with Physics.  He sees it there!  And others have told him they are seeing it in their fields as well.

Big Data has us on the path to stagnation, if we are not careful. Actually, we are a long way down that path…

  1. Looking over the Fence at Networking, Committee on Research Horizons in Networking, National Research Council, 2001.
  2. Needham, Joseph. Science and Civilization in China, Cambridge University Press, (Vol 1- Vol. VII, Book 1) 1965-1998.
  3. Smolin, Lee. The Trouble with Physics: The Rise of String Theory, the Fall of Science, and What Comes Next, Houghton-Mifflin, 2006.

Be Conscious of Your Bias

Author: Avis Yates Rivers (Follow on Twitter: @SitWithAvis)

Last month I shared what some leading companies are doing to combat unconscious bias in the workplace. This month, I will do a deeper dive into why that’s important and why it matters.

Quite simply, unconscious bias negatively impacts an organization’s financial results. According to several studies, diversity benefits innovation and the bottom line in the following ways:

  • Increased sales revenue, more customers, bigger market share
  • Higher-than-average profitability
  • Greater return on equity and return to shareholders
  • Greater potential for creativity, sharing of knowledge, task fulfillment

In addition, groups with greater diversity solve complex problems better and faster than homogenous groups.

Several researchers set out to measure the collective intelligence of a group; they wanted to know whether it could be explained by the intelligence levels of the group’s members. They were surprised to find that one of the key predictors of a group’s intelligence was the number of women on the team: the more women on the team, the more the team’s collective intelligence rose, up to a certain point. The individual intelligence of group members was not a predictor of collective intelligence.

This is an interesting finding because it absolutely counters the “rock star” approach to hiring. We all love the stories of a brilliant loner or a couple of guys dreaming up a tech company in somebody’s garage, but the fact remains that the vast majority of technology is developed by multiple people as part of a team. So whether you’re hoping to be acquired or doing the acquiring, it’s smarter to build good teams than look for the one brilliant standout.

In fact, according to Dow Jones Venture Source, analysis of more than 20,000 venture-backed companies showed that successful startups have twice as many women in senior positions as unsuccessful companies.

Diversity also helps companies grow. Tech companies led by women delivered higher revenues using 30-50% less capital, and were more likely to survive the transition from startup to established business.

So, when we cut to the chase, we know that:

  • Minority Groups Aren’t Broken
  • Majority Groups Aren’t The Enemy
  • The Culprit = Societal Biases We All Share

Society is biased about gender and technology. Period. There are things we can do, and lots of things we shouldn’t do.

Things we should do include:

  • Become a male advocate and inspire other men to do likewise
  • Audit your physical office space for implicit biases
  • Assure inclusive team meetings and social events
  • Examine performance reviews for unconscious bias
  • Remove biased language from job descriptions and job postings
  • Evaluate interview questions and include diversity in the interview process
  • Engage in unconscious bias training for all managers and supervisors

Some things Not to do include:

  • Don’t lower your hiring standards, just make sure you are hiring for the things that matter
  • Don’t slap a boilerplate diversity statement on your job ads
  • Don’t form development teams with just one diverse engineer
  • Don’t keep looking for diverse candidates in the same places you’ve always recruited
  • Don’t depend on underrepresented employees to advance your diversity goals

And most importantly: Don’t give up; this is a long distance race, not a sprint!

The necessity of theory in science, or Big Data is anti-science

Author: John Day (Follow on Twitter: @JeanJour)

Part 1 of 2.

There has been considerable hype surrounding Big Data of late, as if it were something really new. The histrionics have gotten quite deafening. I have characterized the current fad as the 6th generation of Big Data, with the first generation starting in the 1830s. After a carriage accident made it impossible for him to go back to sea, Matthew Maury was made head of the US Naval Observatory in Washington, D.C. There, using the logbooks returned by Navy captains after each voyage, he collated data and was able to discover previously unknown currents in the Atlantic and patterns in the wind that allowed shorter sailing times by days or even weeks. The second generation was Sears Roebuck, which in 1910 built a facility on the outskirts of Chicago to fill orders. At the time, Sears had no brick-and-mortar stores; the catalog was their Web. They were filling 100,000 orders a day and moving a million pieces of merchandise a day in 1910! (You could order everything from nails, clothes, and carriages to an entire house! The houses, for which there was a whole separate catalog of styles, were pre-cut (not pre-fab) and shipped in several installments to give you time to build each phase.)

The 3rd generation would be 1940s Bletchley Park and the advent of the computer. Von Neumann’s interest in computers was to get more data to see the patterns in the differential equations he was working on. The 4th generation would be the 1960s, with Illiac IV and the advent of supercomputers, and the 5th generation would be the 1980s and the establishment (in the US) of supercomputer centers. And now we turn the Moore’s Law crank yet again, and we arrive at the current fad, with racks of machines filling huge buildings and millions of sensors spread around us.

But this last generation is the most dangerous, the greatest threat. Some have even called it a “new science” (if it is, then so was the microscope), or the end of science (all we have to do is crunch all this data and we will get the answers). It is closer to the latter than the former, but not for the reasons they think. Big Data is accelerating us toward stagnation. Let me explain:

A different approach

As a grad student, I discovered Joseph Needham’s magnum opus, Science and Civilization in China. It isn’t just a book. It is a multi-volume (with some volumes having multiple books) encyclopedia of science and technology in China up to about 1750, when it becomes too difficult to determine what was purely Chinese and what was influenced by Western contact.

Why was I reading such things? First of all, it was interesting! What other excuse does one need!?
Second, any system designer or architect must collect models to avoid the “If all you have is a hammer, . . .”[1] syndrome. And the models and accomplishments I found in Needham were fascinating: a very different approach to many problems than found in the West.

A couple of examples will illustrate what I mean:
In ship design, Needham points out that both East and West used nature as a guide. The West used fish; China used waterfowl: much more appropriate for something at the interface of air and water. Fish are a good model for submarines, but ducks are a better model for boats.

In China, the axis of a windmill is vertical and the vanes hang down. Not only is the gearing simpler, but it is always in the wind. It doesn’t have to turn into the wind.

China had Pascal’s Triangle centuries before Pascal.

Seventy years before Vasco da Gama in 1497 clawed his way down the African coast and rounded the Cape of Good Hope to put into Mombasa on the East African coast, the Chinese admiral Zheng He paid several visits to Mombasa with a large fleet of huge ships with watertight compartments and other advancements, just out on a goodwill tour to say the Emperor thinks all of you are wonderful, and if you would like to send back tribute to the Great Ming Emperor, that would be fine.

Interesting isn’t it?

Scientific theory

It is appropriate that we are meeting in Portugal, which had such a major role in the Age of Discovery. Henry the Navigator’s great accomplishments earlier in the 15th century had left a legacy for da Gama to build on that the Chinese didn’t have. As Needham points out, there was one thing missing in Chinese technology: there was no scientific theory.

It is all technique, technology; it is an artisan tradition, craft. What do I mean by scientific theory? Robert MacArthur, one of the founders of biogeography, distinguished Natural History from Science in that Natural History describes but Science predicts. The Chinese had certainly achieved a critical mass of knowledge that should have led to theory. But for some reason (still debated by scholars), there was no theory. Some say it was because they were so practically minded. However, because there was no theory, there was a tendency to lose knowledge: when Matteo Ricci, the first Jesuit into China, arrived in 1600, he initially thought the Jesuits had brought the knowledge that the earth was round.[2] Quite to the contrary, the Chinese had known it centuries before, but the knowledge had been lost. The lack of theory had another, far worse consequence. By the late Ming dynasty (16th C), stagnation had clearly set in. Artisan traditions are predicated on doing what has been done before; improvements come by trying things, not by using theory to point the way.

Needham attributes both the lack of theory and the stagnation to the fact that merchants had very little status in China, and virtually no power.  All power was with the Emperor. In other words, Needham saw commerce as the driver of technology and the reliance on government funding as leading to stagnation.  But as we have seen more recently, the short ROI of commerce can also lead to stagnation. Everyone is looking for a technology enhancement that will yield a quick result, rather than delving deeper for more fundamental results that could yield far more but may take longer and also threaten to undermine existing investment.

Euclid’s accomplishments

I would tell historians that another reason there was no theory in China was that there was no Euclid. They would give me blank stares as if to say, “Huh!?” But historians don’t see Euclid as we do. Not Euclid for geometry (the Chinese understood geometry quite well), but Euclid as an example of an axiomatic system.

As we all know, Euclid’s accomplishment is the Holy Grail of science.[3]  The ultimate goal in any field is to be able to reduce it to a small number of assumptions from which all else can be derived. Newton did it for mechanics; Maxwell, for electricity and magnetism; CERN and the physicists are trying to do it for everything. Not that any scientist sets out to do that, but every scientist worth his salt is always open to that flash of insight that points toward a unification.

That of course raised the question: Why did the West have Euclid!? Why did Euclid do what he did? What pushed him to create such an elegant edifice!?

Of course, we are lucky just to have Euclid’s Elements, let alone to know anything about who Euclid really was, how the Elements came about, what made him want to organize things that way, or why he was looking for such an elegant solution. Lost in the sands of time.

As it turned out, I didn’t need to know why Euclid did it to understand why. The insight came while reading a geometry book by Heilbron, who notes that while several civilizations developed mathematics, only the West developed the concept of proof. The others have recipes, examples that are used as patterns (dare I say algorithms?), but not proof. It is clear that the Babylonians had the Pythagorean Theorem, but they didn’t have a proof of it.

This answers the question! How?

Challenging proof

What do you do when you challenge a proof? You question the assumptions. Continually challenging the assumptions leads to the minimum set of assumptions that will suffice: hence, an axiomatic system. Then it is a short step to ask what results can be derived from just those assumptions, then what those results enable, and so on, and you have the Elements! (BTW, my favorite exposition of the development of meta-mathematics is Chapter 1 of Bourbaki’s Theory of Sets. It is delightful!)

Why is theory important to science? Bear with me. On June 14, in my next blog I will elaborate on this…


[1] As Mike O’Dell says, “When all you have is a hammer, everything looks like your thumb!”

[2] This is not the huge discovery we generally think it is. Since ancient times it was well known among the educated classes. One merely has to watch a ship sail over the horizon and notice that the hull disappears before the sails do (the origin of the phrase “hull down”) to know the earth is round. No one funded Columbus, not because they thought he could sail off the edge of the world, but because they knew it was round, knew its circumference, and knew they didn’t have ships with sufficient range to make the voyage. Columbus fudged the numbers to make them look feasible, found someone with money who believed his math, and then got very lucky when there was a continent in the way!

[3] A carry over from when the distinction between math and science was not as clear as it is now.

The growing importance of digital assessments

Author: Ingrid Melve (Follow on Twitter: @imelve)

NTNU, a lovely technical university in Trondheim, Norway.
Photo by Eirik Refsdal.

I admit to being fascinated by universities and their IT systems, and not least their Internet infrastructure. I realize that I should not have a love affair with several institutions at once (and to be precise, an institutionalized love affair is more properly a marriage, of which one should definitely not have more than one at a time). But the fascination offered by the Internet itself is hard to resist.

One of the best things about working for a national research and education network is the people I meet. Whenever there is a really challenging situation involving the internet, someone at a university will pick up a keyboard or a phone or a coffee cup and involve us in the discussion. This trust is not built easily, and it needs to be reinforced by working together and sharing information continuously.

Our latest hard-to-solve issue has been investigating what happens when assessments move online into the cloud and students bring their own devices to the mix. Why all the interest in this area? There is a combination of factors setting up a perfect storm:

  • Students protest the use of paper for text processing, which is understandable, especially since upper secondary schools have equipped all students with laptops for the past five years, and upper secondary assessments are digital
  • The move to the cloud makes it easy to deploy large-scale solutions for BYOD environments
  • eduroam and wifi are everywhere, making it possible to have clients on and off campus, even for high-availability and high-security situations
  • Restructuring universities and merging institutions creates opportunities for procedural changes
  • The web everywhere and consistent user experiences make it easier to implement solutions across platforms

No less than five national working groups have looked into various aspects of the issue and worked on documenting best current practice:

1) how to build infrastructure for examination sites both on and off campus
2) requirements for student PCs (BYOD)
3) defining a coherent assessment process, with workflow and description of what is involved in assessments both on the learning and the administration side
4) ICT architecture for assessment: information architecture and process overviews (recent results are published)
5) integration requirements, with testing and live operations of a shared integration point (this group is not done, as they are still working on the recent input from the other groups)

Does this work have impact? Well, there is a proposal to change national legislation to ensure that BYOD is possible, requiring each student to bring a PC to university (currently out for hearing until the end of summer). This was not only proposed by the working groups but also supported by the Norwegian Association of Higher Education Institutions.

Another example is that new buildings are following the recommendations for building infrastructure. We are not done, but more and more institutions are joining the work and wanting to contribute. And we suspect there are still issues that may challenge us. To me it serves as an illustration of the power of a community speaking with a coherent voice.

The willingness to work together and share best practices never ceases to amaze me. Pooling resources makes sense in a small country like Norway. Maybe shared work also comes with bragging rights: being able to point to the specific parts you contributed. Of course there is a bit of common sense in sharing the work, but common sense is a lot less common than one would think from the name. My conclusion is that contributing to the community is a joy. And hard work.

Unconscious bias in the workplace (2)

Author: Avis Yates Rivers (Follow on Twitter: @SitWithAvis)

Last month I explained in detail what unconscious bias is and why it matters. This month, I'll deal with how unconscious bias (or micro-inequities) disproportionately affects women and people of color in the workplace. I'll also explore what some companies are doing to combat this phenomenon.

If you didn’t read last month’s blog, I encourage you to do so before reading this one. It really lays the foundation and explains what unconscious bias is and why it is important to recognize and deal with it.

Certainly women have made inroads in corporate America, but a Pew Research Center survey released recently points to why women struggle to climb to the corporate world's highest ranks, and why they often tone down their ideas, hide behind an agreeable façade, or leave the workplace altogether.

In fact, studies show that 56% of technical women leave the workplace at the middle-management level. That's twice the quit rate of men. The most commonly cited reasons for leaving include an unwelcoming environment or a bad relationship with a supervisor.

Four out of 10 surveyed in the Pew study said that there are double standards for women seeking the highest levels of leadership in politics or business. They added that women have to outshine their male counterparts—and more than one-third of respondents believe the electorate and corporate America are not ready to put more women in top leadership positions.

Why is that?

Quite simply, it is the result of a lifetime of absorbing the same images and representations: on television and elsewhere, men are far more likely to be seen in the workplace. It affects the way men see women and the way women see themselves. It's not just men but women too who have ingrained expectations of workplace roles.

When you look at representations of African Americans or Hispanics in leadership roles, the numbers are even more distorted; in fact, they are woeful! It is only in the most recent television season that we have had more than one prime-time television show or series starring an African American or Hispanic in the leading role.

Consequently, women and people of color are not seen in roles of leadership – either in the media or in the workplace. As such, a bias is established in the minds of hiring managers that prevents them from selecting qualified female or minority candidates to fill certain roles.

The first step to solving this problem is recognizing that it exists and that it robs the organization of creativity and productivity. That recognition begins at the top as demonstrated by several corporate leaders of late. Here’s how they are choosing to solve the problem of unconscious bias in their companies:

One simple thing some companies are doing to eliminate the potential for bias to creep into their hiring practice is to strip resumes of names and other identifying information and just assign each resume a number.

Roche Diagnostics, a subsidiary of pharmaceutical giant Roche Group, is aiming to make its managers more aware of unconscious bias. It held two sessions in recent months to acquaint its senior and middle managers with unconscious bias, as well as a third at its national sales meeting this past January.

According to Bridget Boyle, VP of HR at Roche Diagnostics, in addition to ongoing training to highlight unconscious bias, the company broadened its recruitment and promotion policies in 2013. More than half of its lower-level employees were women, but their presence began to thin in middle management.

To spark change, the company instituted a mentor policy that paired 150 sets of employees over 18 months. It’s also strengthening maternity and paternity benefits and assuring diverse slates of candidates for the 750-800 openings it fills each year.

Royal Bank of Canada started an effort in May 2013 to raise awareness of bias among its 78,000 employees worldwide. Dr. Mahzarin Banaji, a Harvard University social ethics professor who co-authored Blindspot: Hidden Biases of Good People, has held sessions for about 1,000 of RBC's executives to help alert them to their biases.

In addition to these meetings, employees have access to tests developed by Harvard to assess their unconscious biases and apply their personal findings in workshops. These sessions, says Norma Tombari, RBC’s director of global diversity, are continuing in 2015 as part of the company’s “entire talent management decision-making.”

Intel’s CEO, Brian Krzanich, announced at the most recent Consumer Electronics Show that he was committing $300 million to ensure his workforce achieves ‘full representation’ by 2020. This is a bold move not seen before in diversity initiatives, and he has put himself and Intel out front in terms of public accountability.

To be sure, their strategy needs to address a myriad of complex issues, but I applaud Mr. Krzanich for his bold leadership.

These and other bold moves are what is needed to improve the situation. Even with such concerted efforts, however, change won't be sudden. It has taken years for us to become conditioned this way, and that conditioning has to be diligently undone over time.

The good news is that companies have finally begun to recognize and acknowledge they have a problem. As we all know, that’s the first and most crucial step to achieving a change in behavior.