Friday, April 23, 2010

Paying For Art

denherr asks, How is an artist, such as a musician or writer, supported to create and contribute if his creations have no economic value, and his individual creativity is limitlessly remashed into collective works? How does he feed himself? I love learning from your free presentations but where ultimately does the money (or oil) come from for your beer and your plane tickets to the conferences all over the world?

denherr, this is an old argument, but in brief, there are many ways to pay artists without levying a per-play or subscription fee on works, and without requiring royalties. In fact, most people in the world manage to make a living without these special privileges.

Take a brick-layer, for example. Every work he creates is an original work, but he doesn't get a patent or copyright for it, can't prevent other people from copying it or selling it or giving it to friends, etc. A brick-layer is paid for the time he lays bricks.

A chef, even a famous chef who creates unique dishes, is in the same situation. Recipes are shared freely and are almost never owned (indeed, cuisine would cease to exist if no person could duplicate a recipe). Chefs don't charge royalties, don't get copyrights, and yet make a very good living.

The people who write technical documents, who create commercial jingles, who do graphic arts or commercial work are also in the same situation. They again must surrender any royalties or rights to the work they create. But they do not die of starvation or fail to pay the rent. They make very good, sometimes even wealthy, salaries.

The situation where nobody pays a musician or artist except by buying individual copies of his or her work is unique. It probably wouldn't exist at all, except that it was created by music and book publishers as a means of underpaying artists (most of whom actually do struggle to make a living - so much for the beneficial effects of copyrights and royalties!).

If it weren't for the whole publishing and copyright mess we find ourselves in, artists would probably make very good livings, earning and keeping the entire profits from their live shows (instead of repaying advances they had to obtain from their publishers). Fans and patrons would pay for specific works, and people would line up to pay enough money to sponsor, say, a new Lady Gaga song.

Most people in the world get paid for the time and effort they put into something. There's no reason artists can't be paid this way, except for the fact that publishers want to keep ripping them off.

This is how I get paid. I don't sit on my work and demand royalties; I share it as widely and freely as I can. This has resulted over the years in my being hired for a series of positions where I am paid to create even more work and share it (though occasionally my employers grumble that they should sit on the work and collect royalties, not realizing that this would in fact restrict my ability to create new work).

By sharing my work freely, people around the world are able to see it, and they willingly pay for me to come and speak to them. I do not collect speaker fees, but I do require that they pay my expenses, because otherwise I could not afford to travel to their cities. We both benefit, because I then use these trips to produce work that we share with other people around the world, and the cycle continues.

You might think, it's not a very good deal for some organization to pay several thousand dollars to fly me to their city. But consider the cost were they to buy books from me instead. They could get maybe 30 or 40 copies of an academic text for the same amount. This way, they get all the content I ever create for free, as many copies as they would ever need. It's actually an excellent deal for them.

What does my employer get? My employer is the government of Canada (it might have been some company, or a university; it just happens to be the government). They get the reputation from sponsoring my work, they get significant input into what I work on and where I work, they get me to contribute some of my work to Canadian companies (resulting in outcomes like this). I promote Canadian culture and values in Canada and around the world, stimulating business (and maybe even tourism) for Canada. It's a good deal for my employer.

What don't I get? Filthy rich. There's never going to be a million dollar payday in my life - no album that goes platinum, no book that hits the best-seller list. But you know what? I'm OK with that - because giving up the decent life I have for a longshot like fame and riches is a sucker's game. And for those of us who do anything outside popular culture - anything philosophical, academic, esoteric, radical or fringe - fame and fortune will never ever happen. Not only would I have to give up my nice home and salary, I would have to give up the things that really matter to me - my art, my creativity - to play this sucker's game. It's not worth it.

So that's how artists can be paid. We can pay them the same way we pay bricklayers, the same way we pay chefs, the same way we pay me. And what we get for that, I would wager, would be a beautiful thing.

Tuesday, April 20, 2010

What's Already Been Proven

Responding to David Wiley, who cites John Anderson and Lael Schooler’s 1991 Reflections of the Environment in Memory.

This is the same John R. Anderson who wrote ‘Human Associative Memory’ with Gordon Bower, which describes the associative structures fundamental to my own work and also to associationist reasoning generally. (another Canadian, too).

In other words, this sort of work *is* the “empirical work done to shore up the nascent theoretical framework called connectivism.” I suppose more of it can be done; I cite it when I come across it. I can’t speak for George, but it’s not like I just made some stuff up and called it a theory.

Related to this, when you ask questions like, “what are the nodes that are connected in connectivism?” I refer you, not to hand-waving generalities, but to things like Boltzmann engines, which draw upon the thermodynamics inherent in the gradual build-up and release of electrical charges in neurons. There’s plenty of solid empirical research here, some solid mathematics, and even a spiritual dimension if you’re so inclined (my various references to ‘harmony through diversity’ are directly grounded in the Boltzmann machine). The average human is more complex than the average neuron, of course, and different mechanics apply. But within some bounds, the same sort of descriptions that apply to neurons also apply to humans – the phenomenon of a ‘propensity to respond after repeated stimuli’, for example, can be observed in both.

That said, what seems to be important is the set of connections, rather more than the particular physical make-up of the nodes being connected. There is no evidence stipulating that only certain kinds or essences of nodes can be connected (Thomas Nagel notwithstanding). Still, there is a requirement that the entities be in some sense *physical*, because the nature of a connection (as I’ve often stated) is that a change of state in one results, via the connection, in a change of state in the other (that’s why graph theory, nodes and edges, constitutes only a virtualization, and not an instantiation, of network learning).
For while I realize that good-old SR looks like paired associate learning, you can’t substitute words, like ‘Paris’ or ‘France’, for two nodes. A word, in and of itself, has no causal property; only the tokening has a property. This is important because a word has no discrete token inside a human mind, and therefore, while we can *represent* an association between ‘Paris’ and ‘France’, we cannot *instantiate* it. *That* is why we prefer complex networks (and what accounts for the generally anti-cognitivist stance of my own work).
Now I am perfectly happy to talk about simple networks. One node, a connection (not merely an ‘edge’), and another. We can represent nodes as simply as possible – on/off (though in reality many more states are possible).

We can represent different networks of this sort. A connection as simply as possible (on/off) such that if node A is on and the connection is on, node B turns on (that’s an excitatory (or Hebbian) connection). A connection as simply as possible (on/off) such that if node A is on and the connection is on, node B turns off (that’s an inhibitory connection). Etc. What are the mechanisms for these? Could be electric switches, could be chemical reactions, could be dominos. If you look at Rumelhart and McClelland’s ‘Jets and Sharks’ experiment, you see we can create pooling and differentiation with these kinds of connections.
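The simple on/off case can be written out in a few lines of code. This is a toy illustration only; the function name and the boolean representation are mine, not drawn from Rumelhart and McClelland's actual model:

```python
def step(a_on, conn_on, b_on, excitatory=True):
    """Update node B along a single on/off connection from node A.
    An excitatory connection turns B on when A fires through it;
    an inhibitory connection turns B off instead."""
    if a_on and conn_on:
        return excitatory  # True (B on) if excitatory, False (B off) if inhibitory
    return b_on            # no signal crosses: B keeps its previous state

# Excitatory (Hebbian-style): A on + connection on -> B on
print(step(True, True, False))                   # True
# Inhibitory: A on + connection on -> B off
print(step(True, True, True, excitatory=False))  # False
```

Chaining such updates across several nodes is all it takes to get the pooling and differentiation effects mentioned above.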

If the nodes aren’t simply on/off, if the connection is represented with a probability function, etc., based on different properties, you get different types of networks.

All of this is known, old, well-proven. It doesn’t need to be proven all over again, just for education. Quite the opposite. Education should, for once and at long last, learn from what has already been proven.

Friday, April 16, 2010

Network Equity (1) - Selective Attraction

You are probably one of those who believes that people become rich or famous or powerful because they deserve to be rich or famous or powerful.

Though there may be some minimal conditions (you have to be at least literate, for example, and you may, like George W. Bush, need to be at least a 'C' student) this assertion is for the most part false.

In fact, aside from any unfair advantage an individual may gain (as a result of already being rich, or being related to someone who is already famous, say) the most crucial condition is luck. Being in the right place at the right time.

To illustrate this, let's look at your iPod or MP3 player. You have, say, 1000 songs on your player. If you choose 'shuffle' the player will choose which song to play. And the player also counts how often you've played each song.

If it's all completely random, then if you play 10000 songs, each song should have played about 10 times. Chance and probability mean they won't work out to exactly 10 plays each, but there won't be a lot of variability. Play a million songs, and each song will play about a thousand times, give or take a few dozen.
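A quick simulation bears this out. This is a toy sketch using the library size and play count from the example above; the seed is arbitrary:

```python
import random

rng = random.Random(42)  # fixed seed so the run is repeatable
plays = [0] * 1000       # 1000 songs, no plays yet
for _ in range(10000):
    plays[rng.randrange(1000)] += 1  # completely random selection

# Every song clusters around the average of 10 plays;
# no song runs away with the count.
print(min(plays), max(plays), sum(plays) / len(plays))
```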

If we ordered the songs by how often they were played, and mapped out how frequently they were played, our chart would look like this:

Of course, it won't look exactly like that. You might choose to listen to your favorite songs a few times more. And, if you're like me, there are some songs you just want to skip past. So there will be a bit of a head and a bit of a tail to our graph.

Still, overall, things are pretty balanced. Which is what you would expect - and what you would want, of your random song selection.

Suppose, however, you want to tilt the playing field a bit. Instead of selecting completely randomly, suppose you include a slight preference for songs you've already played. So that, say, a song that has been played 10 times more often than another has a 10 percent greater chance of being selected.

Sounds great, right? The best songs will play more frequently, and the duds will never be played at all. Right?

Well, no - not unless you have already seeded all the 'song play' values first. Otherwise, if you let your new 'tilted' random song selector operate on its own, a funny thing will happen.

The first few songs - whatever they are - will receive a slight benefit. This slight benefit will multiply, bit by bit, as the song selector picks them over other songs. It will grow and grow and grow until the selector is playing almost nothing but the first few songs. Any song not lucky enough to have been selected first will not be selected at all.

If we create the same graph we did before, we get what is known as a 'power law' graph, where the few that are played frequently create a big spike, and the remainder, that are played rarely, if at all, form a long tail.

Well, no problem, right? This is exactly what happens with popular songs, well-known websites, the distribution of wealth, and many more things in society. It's the most natural thing in the world.

Sure. But - crucially - if you tried the same experiment on your iPod a second time, you would get exactly the same spike, but with different songs. Do it again, and you get different songs again.
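The runaway effect is easy to simulate. This is a toy sketch; the exact weighting (one extra unit of chance per previous play) is illustrative, not a claim about how any particular player works:

```python
import random

def tilted_shuffle(n_songs=1000, n_plays=10000, seed=None):
    """Select songs with a slight preference for already-played songs:
    each song's chance is proportional to 1 + its play count."""
    rng = random.Random(seed)
    plays = [0] * n_songs
    for _ in range(n_plays):
        song = rng.choices(range(n_songs), weights=[1 + p for p in plays])[0]
        plays[song] += 1
    return plays

# Two runs of the same experiment: each produces a big spike,
# but which song tops the spike differs from run to run.
for run in (0, 1):
    plays = tilted_shuffle(seed=run)
    top = max(range(len(plays)), key=plays.__getitem__)
    print(f"run {run}: top song #{top} played {plays[top]} times")
```

Each run starts from a level field, so which songs end up in the spike is determined entirely by early luck.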

This system will always select a favorite and promote it to the top of the big spike, but this selection is purely random; it has nothing to do with the song itself. Even if you're selecting a song yourself to listen to from time to time, the preferential selection will still promote song after random song to the top of the heap.

What would you say, then, about the song at the top of the big spike? That it 'deserved' to be there? That it had the greatest 'merit' of any of the songs? No, of course not. You can't say anything about the song, except for the fact that, once it got a bit of an advantage, it was able to take off.

Even if you seed the selections with some favoured songs, the results will be all out of proportion to the preference. The longer the selective attractor runs, the more these songs will be favoured. You might not want to listen to the same ten songs over and over - they're not that much better - but that's what will happen if the system is allowed to run long enough.

In fact, that's exactly what happens on popular radio. People tend to prefer to listen to a song that's familiar. This system, allowed to propagate long enough - over three decades, say - produces a global preference for a small set of 'classic' songs - and thus is born 'classic rock radio', which is almost all that can be found in some markets.

Are these songs so much better than all others that they should be played almost exclusively? No, of course not. Go to Indie or free music sites like Jamendo and you'll find some songs you like just as much as your classic rock favorites (at least, you will, once you've listened to them a few times).

This is a phenomenon that can exist in networks generally. Yes, it is a natural phenomenon. As writers like Albert-László Barabási show, we see these in many places, from the formation of river systems to the distribution of limbs in a tree to the distribution of trails and roads.

But these effects do not exist because one option is better than the other. They exist because one option was lucky enough to be first, to be in the right place at the right time.

Whether it is the popularity of a politician, the influence of a song, the number of links received by a website, the viewership of a viral video, the wealth of a corporate tycoon, or any of a hundred other power law phenomena, the predominant characteristic of these is not greater quality or virtue or value, but a simple multiplier effect that could just as easily have chosen something else.

Don't equate wealth with knowledge, fame with insight. Don't equate the accidental properties of a network phenomenon to intrinsic worth or value. The differences between us are far slighter than the disparities in wealth, power or fame would indicate. We are all much more similar than we are dissimilar.

Monday, April 12, 2010

Collaboration and Cooperation

I was asked, by email: I was very interested in your distinction Groups vs Networks. Can we say it has a direct parallelism with the distinction Collaboration vs Cooperation? In terms of enabling student’s freedom, how would you describe each one?

I believe that you can draw a connection between the two distinctions. Collaboration belongs to groups, while cooperation is typical of a network. The significant difference is that, in the former, the individual is subsumed under the whole, and becomes a part of the whole, which is created by conjoining a collection of largely identical members, while in the latter, the individual retains his or her individuality, while the whole is an emergent property of the collection of individuals.

I have identified four major dimensions distinguishing the role of the individual in collaboration from the role of the individual in cooperation:

- Autonomy - in the case of a collaboration, the actions of the individual are determined with reference to the needs and interests of the group, and are typically directed by a leader or some sort of group decision-making process. Groups often have a 'common vision' to which each member is expected to subscribe. In a cooperative enterprise, each individual participates out of his or her own volition, and acts according to individually defined values or principles.

- Diversity - in the case of a collaboration, diversity of aim or objective is not desired. While individuals may engage in different activities, each is understood only in terms of the common end or goal, as in the production of a car on an assembly line. It is important that people speak the same language, sing from the same songbook, or otherwise exhibit some sort of identity with other members. In the case of cooperation, there is no common element uniting the group; rather, each individual engages in a completely unique set of interactions based on his or her own needs and preferences. There is no expectation even of a common language or world view.

- Openness - in the case of a collaboration there is a strong sense of group identity, a clear boundary between who is a member and who is not, often to the point of excluding non-members and even hiding large parts of the group's activities from view. In a network, by contrast, there is not a clear boundary or even a recognized set of members. While membership in a group is an all-or-nothing thing, membership in a network may be tenuous, drifting in and out, like a lurker at the edge of a conversation.

- Interactivity - in the case of a collaboration, information typically diffuses from the centre to the periphery as people receive their 'marching orders'. A 'broadcast network' is more typical of a collaborative organization. Management, structure and hierarchy govern the connections and flow of information. Group communication dynamics are characterized by a 'big spike', whether or not there is a long tail; that is, a few members will have an influence disproportionate to the rest, and will use their positions to define the 'common' or 'shared' values that will be held by the rest of the group. In a cooperative enterprise, by contrast, there is a relative equality of communications and connectivity; there will be no big spike or single centre of influence.

In general, the properties describing a collaboration relate to mass. The creation of movements, whether nationalistic, religious or political, is based on amassing large numbers of people united under the same sign, set of beliefs or statement of principles. These mass activities are often instantiated in the figure of one person, a leader or inspiration. The same belief is held by each of the members, who will also share a certain language or jargon, and this belief propagates from one person to another through a process of diffusion, conversion or enrollment into the cause.

The properties describing a cooperative, by contrast, relate to organization. The creation of networks, whether they be economic or commodity marketplaces, infrastructure or communication systems, ecologies or ecosystems, social networks, local communities, and the like, is based on sets of interactions between members, where these interactions form, as a whole, a unique, distinct and recognizable entity not based in the individual actions, beliefs or values of any, or even all, of the individuals, but rather exhibiting its own logic based on its organization.

It is interesting to note how the traditional 'process' freedoms relate almost entirely to the formation of groups or collaborations. They are not individual freedoms so much as a set of mechanisms that allow the creation and formation of new groups (which was a stunning advance for its time, an era when typically only one group at a time would be allowed to legitimately exist). Consider how 'freedom of assembly', 'freedom of the press' and even 'freedom of speech' allow people to create new groups, while 'freedom of opinion or religion' allows a person to join new groups.

In terms of freedom, it is my belief that a cooperative network engenders greater freedom. This is because, even though process freedoms (freedom of the press, freedom of assembly, etc.) may be the same in the two models, and indeed, essential for each of the two models, the network model allows more freedoms in other dimensions. In particular, an individual working cooperatively has greater empowerment; not merely the right to freedom of expression, but a channel to connect to others, and the means to live according to the beliefs expressed. And the individual in a network is free from a variety of pressures, pressures to conform, pressures to subscribe to a belief or creed, language requirements, nationality requirements, and the rest.

Monday, April 05, 2010

Personal Knowledge: Transmission or Induction?

I'm going to use an oversimplified example from electricity to make a point. I still think there is a deficiency in the personal knowledge management model being discussed in various quarters. Let me see if I can tease it out with the following discussion.

Harold Jarche points to a diagram Silvia Andreoli adds to his last post on personal knowledge management. Here it is:

Now the activity happening at the centre is becoming more sophisticated, with an expanded list of processes to convert data into knowledge. I don't want to focus on the particular types of activity - that's just mechanics. I am more concerned with what might be called the 'flow' of information from data to knowledge.

So let me strip down the details and present an abstract version of the model.

In a nutshell: does the data itself become knowledge, or does the data lead to something else becoming knowledge? Let me use my electrical analogy to make the point.

In what might be called the 'naive model' (not disparagingly) we have a direct circuit from input (data) to output (knowledge). The purpose of the process in the middle is to filter, transform, reshape, and otherwise improve the data, but ultimately, to pass it along. Like this:

Now presumably, what is happening here is that the data is coming in from outside the person and the knowledge is being stored or in some way impressed in the head or mind; there may in addition be an output in the form of a transmission or creative act, producing the freshly minted data as publicly accessible 'knowledge'.

But I'm not at all sure this is the correct model. I don't think there is a direct flow from data to knowledge. My model looks more like this:

What we have here is a model where the input data induces the creation of knowledge. There is no direct flow from input to output; rather, the input acts on the pre-existing system, and it is the pre-existing system that produces the output.

In electricity, this is known as induction, and is a common phenomenon. We use induction to build step-up or step-down transformers, to power electric motors, and more. Basically, the way induction works is that, first, an electric current produces a magnetic field, and second, a magnetic field creates an electric current.

Why is this significant? Because the inductive model (not the greatest name in the world, but a good alternative to the transmission model) depends on the existing structure of the receiving circuit, the knowledge output may vary from system to system (person to person) depending on the pre-existing configuration of that circuit.

What it means is that you can't just apply some sort of standard recipe and get the same output. Whatever combination of filtering, validation, synthesis and all the rest you use, the resulting knowledge will be different for each person. Or alternatively, if you want the same knowledge to be output for each person (were that even possible), you would have to use a different combination of filtering, validation, synthesis for each person.
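An oversimplified sketch of the difference, entirely hypothetical: the 'circuit' here is just a list of weights, not a model of actual neurons, and the nudge factor is arbitrary. The point is only that the output is a function of the pre-existing circuit, not of the input alone:

```python
def induce(signal, circuit):
    """The signal does not pass through to the output; it nudges the
    pre-existing circuit (a list of weights), and the adjusted circuit
    produces the output."""
    adjusted = [w * (1 + 0.1 * signal) for w in circuit]  # input perturbs existing weights
    return sum(adjusted)  # output comes from the circuit itself

# The same input 'data' yields different 'knowledge' depending on
# the pre-existing configuration of the receiving circuit.
print(induce(1.0, [0.2, 0.9, 0.4]))  # one person's circuit
print(induce(1.0, [0.7, 0.1, 0.1]))  # another person's: same input, different output
```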

That's why personal knowledge is personal. Each person, individually, presumably attempting to approximate the production of knowledge output exhibited by other people who know (where this knowledge output may be as simple as the recitation of a fact or as complex as a set of expert behaviours in a knowing community), must select an individual set of filtering, validation, synthesis, etc., activities.

And probably, the best (and only) person who can make this selection is the person him- or herself, because only the person in question knows and can make adjustments to the internal circuit in order to produce the desired output. That doesn't mean we can't suggest, demonstrate, or in other ways mediate these adjustments.

So why do I think the induction model is more likely to be correct than the transmission model?

What characterizes induction is a field shift. Though we can track the flow of energy from input to output (which is why no causal laws are broken) the type of energy changes from electrical to magnetic and back. Hence, the carriers of the energy, the individual electrons, never connect from beginning to end.

A similar sort of field shift happens in knowledge transmission. When we think, we convert from complex neural structures to words. We output these words, and it is these words that constitute (in part) the data that enters the system (other forms of data - other audio-visual inputs, are also present). This data, in the process of becoming knowledge, is not stored as the physical inputs (we do not literally store sounds in our brains) nor even echoes of them.

Rather, what happens is that, as the cascading waves of sensory input diffuse through our neural net, they have a secondary, inductive effect of adjusting the set of pre-existing neural connections in the brain. It is this set of neural connections that constitutes knowledge, not the set of signals, however processed and filtered, that interacted upon them.

At a certain gross level this should be pretty obvious. When we examine the brain, we do not detect sounds or images, nor even (beyond the most basic sort) echo-like constructions or neural arrangements that correspond to them. Nor do we detect sentences, syntax-like structures, or anything similar. Therefore, whatever knowledge is, it has undergone a field shift.

But we do infer to the existence of such structures, and we infer to them on the basis of what appear to be obvious productions of knowledge. Not only can we write text, draw pictures, and speak descriptions, we have actual memories and dreams that have the same phenomenal qualities as those we experienced in the first place. This would not be possible (to echo Chomsky) were they not stored in the mind in the first place. Would it?

Only if there are no field shifts. But if there are field shifts (which would explain why we cannot observe in the brain what we so obviously experience in the having of one) then the production of dreams, memories, verbal utterances, and other behaviours constitutes the reverse field change. It's like converting the magnetism back to electricity again.

Our dreams, memories, thoughts and behaviours aren't stored in the brain and then re-presented. They are built from scratch again as a result of the functioning of the neural network. A memory isn't the same experience had a second time. It is a new experience.

That's why we misremember, have fanciful dreams, see things as we want to see them, and all the rest. When we are recreating the phenomenal experience, this recreation is affected by any number of factors, all the other elements of the neural net, the configuration of the pre-existing circuit.

We can draw numerous lessons from this, and I have drawn them in other posts. That we do not remember 'facts', for example. That knowledge is 'grown' through the growing of our neural network, rather than accumulation or construction or any of the theories that do not incorporate a field shift. And the rest, which I won't reiterate here.

Why this is important for the present purposes is that it changes our approach to the sorts of activities postulated to take place in personal knowledge management, the filtering, validation, synthesis and all the rest. Because we now have two points of view from which we can regard these activities:

- from the perspective of the content on which they operate, or

- from the perspective of the person that is doing the operating.

To put the distinction very crassly: on the one view, the content constitutes the knowledge, while on the other, the operation constitutes the knowledge. This latter view, which can be classified under the heading of operationalist theories of knowledge, is more representative of the inductivist approach.

The paradigm case here is mathematical knowledge. In what does a knowledge of mathematics consist? A typically realist interpretation of mathematics will say something like, "there are such things as mathematical objects, and there is a set of facts that describes those objects, and mathematical knowledge consists of the acquisition, or at the very least, the internalization, of those facts."

An operationalist interpretation of mathematics, by contrast, remains silent on the question of the existence of mathematical objects, and interprets mathematical knowledge as corresponding (for lack of a better word) to the operations typical of mathematics. The number 'four' is tantamount to an act of counting, "one - two - three - four." The act of addition is tantamount to the act of putting one pile of beans in the same place as another pile of beans, and then counting all of them (a short though critical account of Kitcher-Mill can be found here).
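The operationalist reading can be made literal. This is a playful sketch of the bean-counting idea described above, nothing more; the function names are mine:

```python
def count(pile):
    """Counting as an act: 'one - two - three - four.'"""
    n = 0
    for _bean in pile:
        n += 1
    return n

def add(pile_a, pile_b):
    """Addition as an act: put one pile of beans in the same place
    as another pile of beans, then count all of them."""
    return count(pile_a + pile_b)

print(add(['bean'] * 2, ['bean'] * 3))  # prints 5
```

Nothing in this sketch refers to a number as an object; 'five' appears only as the outcome of an operation.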

When we place the locus of knowledge, not in the content, but in the person, then the content becomes essentially nothing more than the raw material on which the learning practice will occur. What matters is not the semantical referent of the input data, but rather, the act (or operation) of filtering, validation, synthesis, etc., that takes place on that content.

When we say something like "words are things we use to think" we should understand this in the sense of "paint is something we use to imagine" or "sand is something we use to tell time". Time is not in the sand, imagination is not in the paint, and thought is not in the words. These are just raw materials we use to stimulate an inductive process - we can generate a field shift from thought to sand to thought again.

Even when you are explicitly teaching content, and when what appears to be learned is content, the content itself never persists from its initial presentation to its ultimate reproduction, so what you are teaching is not the content. Rather, what you are trying to induce is a neural state that, when presented with similar phenomena in the future, will produce similar output. Understanding that you are training a neural net, rather than storing content for reproduction, is key.

Sunday, April 04, 2010

Centennial Park

Andrea and I went for a nice walk in Centennial Park today, where everything was coming back to life. Enjoy our visit.

Friday, April 02, 2010

We Learn

"If we are not careful," warns Michael Feldstein, "open education may actually end up reinforcing economic divides."

He explains, "It's easy for those of us in the open education movement to see our work in opposition to proprietary technology companies, proprietary textbook companies, and the gatekeepers in the university system. But it's not the 'evil' LMS companies, or the 'evil' textbook companies, or the 'evil' administrators and bureaucrats that are failing these students. It is all of us."

Really? Even those working in the edupunk movement - the subject of this post - who are doing everything they can to throw open the gates of learning to all comers? Even the people trying to free learning from the shackles of publishers and vendors that are trying to destroy public education and lock down all learning content? I would like to have it explained to me in what way it is "all of us". What are we not doing?

Feldstein responds, "You seem to have missed the main point of my post. The millions of learners I am talking about will not magically learn just because we make resources freely available. My point is that edupunk and OER do nothing, in and of themselves, to help these students leap the chasm that they will have to leap in order to further their education. Pretending otherwise is pernicious."

First of all (I write back to him), I did not misunderstand the point of the post. I know that this is what you're saying.

My response is predicated on what seems to be the presumption in your post that, though OER and edupunk do not provide the support these students need, corporations and institutions *are* providing that support.

So my first line of response is to reject that presumption. The reason we have so many students who are utterly unable to learn for themselves is precisely *because* of corporations and institutions.

They are not providing help. They are actively hindering it. It is in their interests to keep students dependent and unable to learn for themselves. They actively act against attempts to provide this support.

The other part of your argument, the part you stress here, is the proposition that edupunk and OER do not, by themselves, provide the support students need in order to learn for themselves.

But this is to raise the same point raised by David Wiley and invites the same sort of answer I gave there.

In particular, "So long as we depict open learning as some form of 'independent study', then yeah, it will appeal only to the fifteen percent of people who like to study.

"But mostly the people behind open education – the technologists, at least; the administrators remain institution-bound – depict it as anything _but_ ‘independent study’. It’s depicted as more like creating art and music and games and other content, activities that engage far more than some elite fifteen percent and, when sufficiently equitable, attract something more like 85 percent than 15 percent."

If you get beyond the characterization of open education as an alternative *institutional* response, and see it in its much truer light as a set of mechanisms to encourage and allow creativity, engagement, and empowerment, then you locate in edupunk and OER the missing elements.

The problem with depicting edupunk as *only* the provision of free resources is that you ignore the forces and mechanisms put into place to put those resources there in the first place.

And while David Wiley and others talk about more traditional OER (e.g., here), the approach I and the edupunks take is that these resources are produced by the members of the community themselves.

As I said here, "the functions of production and consumption need to be collapsed, that the distinction between producers and consumers needs to be collapsed. The use of a learning resource, through adaptation and repurposing, becomes the production of another resource."

Edupunk, and for that matter OER, are not and should not be thought of in the context of the traditional educational model, where students are passive recipients of 'instruction' and 'support' and 'learning resources'. Rather, they belong to a much more active conception, where students are engaged in the actual creation of those resources.

Now to return to my original point, this is exactly what corporations and institutions do *not* want edupunks and proponents of OERs to do, and they have expended a great deal of effort to ensure that this does not become the mainstream of learning, to ensure students remain passive and disempowered.

They throw fear, uncertainty and doubt into potential supporters by raising the spectre of copyright infringement, patent challenges, and dangerous content, so much so that material that is not professionally produced is deemed too dangerous to be used in education, and connections to distributed networks of resources are deemed so risky they must be blocked in companies and educational institutions.

They redirect those people who are genuinely good of heart and want to contribute to OERs toward a model that emphasizes production and publication by institutions, employing foundations and funding agencies to guide this effort, and perhaps incidentally (though not so far successfully) flooding the market with institutional OERs that would eliminate the need for community-produced OERs.

They attempt to co-opt nascent OER initiatives by directing them toward commercial enterprise, arguing that resources must allow commercial licensing, and directing production toward enterprises and initiatives that must receive seed funding and draw a return on that investment through the conversion of OERs into commodities.

And they foster a sense of incapacity in public opinion and the media, suggesting to students themselves that they are incapable of independent action without the comforting support of corporations and institutions, that they are simply not capable of learning for themselves. From the first utterance that "OCW is not an MIT education", the suggestion has been that education must needs be a high-priced endeavour, available, really, only to those willing to pay the price.

In fact, what we see on the internet, and especially (albeit constrained) in web 2.0 services, is a blossoming of creativity and initiative. Even if this currently represents only a minority of the population (and studies, depending on how you look at them, argue both ways), it seems clear that this is something that has taken hold and is in the process of becoming mainstream.

It is activity and work that is taking place outside educational institutions, and would, if it could (and often does), take place outside the corporate environment.

It is the world of mashups, of deviant art, of self-help discussion groups, of environmental activism and pirates, of self-managed learning, of hobbyists, of hackers, of open source programmers, and on and on.

Don't tell me none of this exists.

If you care to say all of this is not providing the support students need, make the point. But I think we cannot start from the presumption that edupunk and OER are doing nothing to support, motivate, scaffold and empower students. Quite the opposite. 

Thursday, April 01, 2010

Penalizing Poor Credit

Responding to David W. Campbell, who defends the practice of using credit scores to determine insurance rates.

The concern is more generally the use of credit scores to evaluate people for things that have nothing to do with credit.

The case of insurance is only the thin edge of the wedge. In the U.S. employers often subject prospective employees to credit checks, and reject those with poor credit.

Applying a means test for the provision of basic needs, such as insurance or jobs, and then structuring the result to discriminate against those most in need, is divisive and dangerous.

It serves to increase, rather than bridge, income disparities in society, and thus propagates the various social ills that result from wide income disparities.

The price of insurance should be the same whether you are rich or poor, and if there is to be a price differentiation, it should most certainly not penalize the poor.

Historically the insurance industry has acted as exactly the opposite of fair and equitable brokers in society.

We have seen this with the health insurance industry in the United States, which has historically refused to insure people living in the wrong location, making the wrong sort of income, or for any of a wide variety of other putative facts that correlate with higher payouts.

We have seen the auto insurance industry in New Brunswick exact significantly higher premiums here than in other provinces not on the basis of any difference in payouts but because the limited competition in a poorer market makes it possible to charge higher premiums. Private industry charges what the market will bear, and poorer regions bear higher prices as a result of lower competition for mandatory insurance.

And while the Cooperators may be better than most insurance companies, on the premium (rather than the ownership) side of things, they function by the same principles as private insurance, which means premiums are based not as much on potential payout but rather on what the market will bear.

Credit checks do not report an insurance-related property of the individual, but the use of credit checks reduces the number of opportunities for people with lower scores to obtain insurance, and this forces up their premiums. It puts people with lower credit scores in a take-it-or-leave-it position.

The tactic is akin to forcing poor people to go to the most expensive doctor in town, because he's the only one who will treat them.

Or forcing (as actually happens) welfare recipients into the most expensive apartments in town, because they are the only ones that will accept welfare recipients.

It is wrong, and should be prohibited.

Surveys Are Not Connective Knowledge

Responding to Steve Covello, who asserts "the collective opinion based on crowdsourced data collection means nothing more than a statistical point of interest... In a “data happy” world, we are inclined to reflexively respond to patterns and trends in information – the so-called emergence phenomenon mentioned by Stephen Downes and Connectivists in general – rather than the inherent validity of the basis for the data trends."

Gross mass-based phenomena such as yes-no votes are not emergent phenomena and are not what is meant by 'collective intelligence'.

That would be like attempting to analyze the meaning of a set of pixels by counting how many are 'off' versus 'on', instead of looking at the organization and recognizing in that a picture of Richard Nixon.

The fruit of collective intelligence, which I (and others) have described as an emergent phenomenon, results from the linkages and connections between individuals, and not a counting of properties (such as survey results) of those individuals.

This emergent knowledge is not intended to compete with, or replace, qualitative or quantitative knowledge. The assessment of whether Obama is a Muslim is not the subject of collective intelligence, any more than the assessment of how many children he has would be based on what colour jacket he is wearing. Just as we should not confuse qualitative and quantitative data, we should not confuse either of those with data describing connections and relations.

As to whether observation of emergent phenomena based on linkages or relations possesses "inherent validity", rests on an "objective measure, evidence of intellectual virtue, rational thinking, or consideration of viable alternatives", depends on the "reliability and validity of information", or demonstrates being "smart, correct, educated, having wisdom, having valid experience in an area of knowledge or skill": such data - just like assessments of quality or quantity - are and ought to be subject to assessments of reliability, and not accepted as fact uncritically.

Just as nobody would accept a claim like "Obama is purple" or "Obama is really two people" uncritically, without corroboration or verification, neither should we accept statements like "Obama is a Muslim" or even "this arrangement of pixels depicts Richard Nixon" without the same scrutiny.

The idea of emergent properties, or collective intelligence, or (as I would call it) connective knowledge, is not inherently opposed even to the strong realism assumed in the assessment above. It is not inconsistent to assert that "there are facts of the matter" and "these facts are expressed as connective knowledge".

The point of asserting that there _is_ connective knowledge is to assert that "this domain of facts is not exhausted by observing qualities and counting entities or their properties; there is a distinct set of facts represented by the *connections* between these entities." This is a proposition that, even granting the naive sort of realism assumed above, is difficult to refute, and is not refuted by assertions such as "a large quantity of people express the belief that Obama is Muslim."

If we wanted to learn about Obama's religion - which is not a simple observable or countable property - then we would not sample what people unconnected to him express as beliefs. That's like determining the colour of grass by counting pebbles on the beach. Rather, we would amass and collect the set of Obama's *connections* and *interactions* with other people and things, and determine whether this constitutes a set of patterns that more closely resembles those of a person we would typically call a "Muslim".

Does Obama go to Muslim assemblies, such as Mosques, or does he typically assemble with and interact with Christians? Does he regularly consult Islamic texts, or would his readings be more typical of work read by Christians? Can connections in his thought be drawn to Islamic Law, or does an analysis of his texts demonstrate a stronger affinity with Christian thought? Do the utterances and texts of people connected to Obama describe him in terms typical of those describing Muslims, or do they tend to connect him to terms typical of those describing Christians?
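The method described above - classifying an entity by the pattern of its connections rather than by polling unconnected observers - can be sketched as a toy computation. Every name and label here is invented purely for illustration; real connective analysis would of course involve far richer patterns than a neighbour count.

```python
# Hypothetical sketch: infer a property of an entity from the labels of
# the things and people it is actually connected to, rather than from a
# survey of unconnected opinion-holders.

def infer_label(entity, connections, labels):
    """Return the most common label among an entity's connections."""
    counts = {}
    for neighbour in connections.get(entity, []):
        label = labels.get(neighbour)
        if label is not None:
            counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get) if counts else None

# A toy network: the assemblies, texts, and people the subject actually
# interacts with, each carrying its own characteristic label.
connections = {
    "subject": ["assembly_A", "text_B", "friend_C", "friend_D"],
}
labels = {
    "assembly_A": "christian",
    "text_B": "christian",
    "friend_C": "christian",
    "friend_D": "muslim",
}

print(infer_label("subject", connections, labels))  # christian
```

Contrast this with a poll: the poll counts a property (a belief) held by disconnected individuals, while this procedure reads the structure of the connections themselves - which is the distinct kind of fact connective knowledge is about.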

Asserting that "Obama is a Muslim" based on a poll would be irresponsible, and no person advocating any form of collective intelligence or connective knowledge would assert otherwise.

But asserting that there is some simple observable property that verifies or confirms that "Obama is not a Muslim" is equally irresponsible. Naive realism does not refute connective knowledge when the reality being described is complex, when there is no simple observable or countable fact of the matter.

Connective knowledge, in other words, does not refute or overturn existing knowledge; rather, it offers us a *new* type of knowledge, one that *cannot* be confirmed or refuted by simple observation of data; and the employment of connective knowledge to assess and evaluate such assertions *is* a demonstration of being "smart, correct, educated, having wisdom, having valid experience in an area of knowledge or skill".

Update: Steve Covello has responded with detailed commentary.