Entries in 'Information filtering'

Is Seth Godin Polluting A Powerful Space – Or Building A Tribe?

Seth Godin writes on the trap of social media:

If we put a number on it, people will try to make the number go up.

Now that everyone is a marketer, many people are looking for a louder megaphone, a chance to talk about their work, their career, their product… and social media looks like the ideal soapbox, a free opportunity to shout to the masses.

But first, we’re told to make that number go up. Increase the number of fans, friends and followers, so your shouts will be heard. (…)

This looks like winning (the numbers are going up!), but it’s actually a double-edged form of losing. First, you’re polluting a powerful space, turning signals into noise and bringing down the level of discourse for everyone. And second, you’re wasting your time when you could be building a tribe instead, could be earning permission, could be creating a channel where your voice is actually welcomed.

Leadership (even idea leadership) scares many people, because it requires you to own your words, to do work that matters. The alternative is to be a junk dealer.

The game theory pushes us into one of two directions: either be better at pump and dump than anyone else, get your numbers into the millions, outmass those that choose to use mass and always dance at the edge of spam (in which the number of those you offend or turn off forever keep increasing), or

Relentlessly focus. Prune your message and your list and build a reputation that’s worth owning and an audience that cares.

So, what I wondered when reading this is: is Seth Godin himself “polluting a powerful space” – or “building a tribe”? Who are examples of one or the other? What category does a Guy Kawasaki or a Robert Scoble fall into? Which was Barack Obama’s use of social media? And which is Seth Godin’s?

I would have liked to ask Seth this on his blog, but his blog doesn’t allow comments. That is smart in one way, because it provokes me to write an entire blog post instead while linking back to his article – but it is somewhat paradoxical for someone who wants to build context, which I presume Seth Godin does. And it is not as smart as allowing comments, which demonstrates the ability and capability to listen as well as to “shout”.

It’s not that I completely disagree with what Seth writes above, but I believe there is a bit more to the case. There is a thin, thin line between “polluting a powerful space” and “relentless focus”. Where that line runs depends largely on who is on the receiving end, what they expect from you and what they are looking for. I do not see that the two are so easily told apart, except that you know a signal, and you know noise, when you see them. Thankfully, we all have the power to turn off noise and filter our incoming information streams ourselves. Increasingly, we rely less on the editorial filters of others, although we do rely a lot (too much, IMHO) on the information architectures built by others (especially when using social networks such as Facebook or Google+).

The web that you describe and hope for, Seth, where we can focus relentlessly, is the same one I want – but right now numbers are rewarded, and numbers are what makes the web (or large parts of it) a frantic race for PageRank, clicks and impressions. SEO, linkspam, noise, waste of time and waste of eyeballs. Much of this comes from design faults of the space itself, which we can work to eradicate and improve upon – more so than from the noise of any particular noisy individual “polluting” our much-agreed-upon, intensely “powerful space”.


The Follower Slot Syndrome

The social messaging service Twitter, which has been called the Swiss Army knife of online communications, has seen a few changes under the hood since the inception of the service. Among these is the hardcoded follow rule, introduced on Twitter in late 2008. Those hit hardest by this rule are users who are unaware of the limits and are very generous with their follows. When I myself bumped into the limit, it gave rise to thoughts about how I use this service, what kind of value it has, and how I need to follow and unfollow others.

Important stuff? No, not really. But then again, it touches on some pretty important things, like our ability to speak and be heard via the online architectures we use. And that warrants some lengthy attention, IMHO. I hope some influencers and “high profile” Twitter users will take note, reconsider their stance and build up their capacity to deal with larger information intakes.

The follow rule

Chances are you won’t have bumped into this limit if you’re new to Twitter, but if you follow many people, and specifically if you follow more users than follow you back (typically celebrities or other high-profile influencers), you’ll likely bump into it when you hit 2000 follows.

Before Twitter introduced this rule, following was free game. Everyone could have as many or as few followers and follows as they liked. Everything was open and one could be generous with one’s attention without fearing that one would “run out” of slots. This changed dramatically with this rule.

The basic rule is this: once you hit 2000 follows, you can only follow up to 10% in excess of your own number of followers.

Basically, if you’re followed by 2000 users, you can follow 2200 yourself. If you’re followed by 10,000, you can follow 11,000. This rule, while well-intended, has some bizarre effects when you take a closer look at it. Among other things, it significantly raises the value of the commodity on Twitter known as a follow (i.e. attention), and even more that of a mutual follow (mutual attention), i.e. someone who follows you whom you follow back.
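
For the arithmetic-minded, the rule boils down to a one-line formula: the larger of 2000 and 110% of your follower count. A minimal sketch in PHP (the function name is mine; the numbers are the rule as described above):

<?php
// Sketch of the follow limit described above: you can follow up to
// 2000 users regardless, and beyond that only 10% more than follow you.
function max_follows($followers) {
    $allowed = (int) floor($followers * 1.1); // your followers + 10%
    return max(2000, $allowed);
}

echo max_follows(1500) . "\n";  // 2000 - the limit hasn't kicked in yet
echo max_follows(2000) . "\n";  // 2200
echo max_follows(10000) . "\n"; // 11000

Note that 2000 followers leave exactly 200 slots beyond a full follow-back – a number which becomes important later in this post.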

Background: “bait-following”

This rule was introduced to combat “bait-following”. This is also sometimes referred to as Twitter “spam”, though I don’t acknowledge that there is such a thing as “spam” on a service like Twitter. Many users have experienced this particularly obnoxious phenomenon. Some users, either by themselves or using tools which utilize the Twitter API, will track you down based on your profile description or keywords in your tweet track record, and follow you. What makes them different from users who genuinely want to connect with you is that they do this en masse, following thousands of users in the hope that some percentage will follow back. Hence the baiting. Those who do follow back can then be exposed to advertising or other spam-like messages, such as affiliate links to products you’re really not interested in, or links to services which “help” you get “more followers”.

In Twitter’s early days, it wasn’t uncommon to browse around and follow other users somewhat randomly and sometimes stumble over interesting profiles and make new genuine connections. But automated tools made it considerably easier to “exploit” the fact that most Twitter users were generally willing to follow back others who were interested in connecting with them (and maybe still are, to a large degree).

These tools and the users who employ them (I’ve experimented with some myself at one time) use Twitter as a broadcast platform. It is the same logic applied to the online medium as is daily applied to television: it doesn’t matter if you waste 99% of your audience’s time, if you can sell something to the remaining 1%. That may be enough to make it worth it. Trouble is, the 99% still think it is a waste of their time, and therefore using methods like these to “increase following” is doomed to dry up sooner or later, as most will quickly see through such scammy attempts at gaining attention and unfollow.

After the hardcoded follow rules, scammers must now unfollow all those users who don’t follow back (comparatively easy with automated tools), but then they are free to repeat the stunt. In other words, this particular type of Twitter use persists. It’s still very common, and there’s very little the hardcoded rules can do to prevent it, because Twitter is fundamentally a very open platform which grants access to its data to a wide host of third-party tools (which, among other things, is what makes it great).

Twitter misconceptions

I wouldn’t care about follow or follower numbers as much as I do here, were it not that I feel Twitter nurtures some misconceptions about their own tool, which will make it less valuable to me and other users – and ultimately to Twitter too.

First, as I stated, it is hard for me to accept that the crude misuse of Twitter described above is spam. For anyone to deliver a message to someone on Twitter, the recipient has to follow the sender first. So messages on Twitter are always solicited. That you have been tricked into soliciting the messages doesn’t make them unsolicited.

In my humble opinion, Twitter should have kept their service pure and butted out. They shouldn’t have become involved with determining what kinds of interactions took place on their service. Twitter would have survived fine, in spite of the crude attempts to undermine its usefulness. They should have worked to ensure it stayed a strong platform, one which could become as reliable as a phone line, but way more powerful. Twitter is a strong, versatile platform, and people used it very creatively on their own, blocking users they didn’t like and following those they did. It was brilliant.

But they did get involved. Twitter as a company couldn’t just quietly watch the many ways scammers were perceived to be undermining their service. Fear started to kick in, and demands came from some users that Twitter regulate and filter conversations and connections. They started suspending user accounts whose following patterns made them suspicious. And they introduced hardcoded rules, aimed at stifling the particular kind of baiting spam described above.

Twitter has a perception of its own service as a stream of information which has to be managed. No one can manage an intake from more than 2000 follows – at least not without losing out on many messages. So the argument for such hardcoded rules goes. However, this perception is a wrongheaded attempt to figure out how Twitter data is used. The truth is that Twitter has no idea whatsoever what creative ways users may find to take in the data in their streams. One user taking in a lot of information may analyze it with a piece of software Twitter knows nothing about. Another may write a tool which filters the incoming stream according to criteria Twitter would never understand. Fundamentally, Twitter is optimized for filtering at the receiving end, as the information intake will almost always be much, much larger than the outgoing information stream. What we need, in other words, is not better ways to restrict access, i.e. hardcoded limits at the posting end of the information loop – but better filters at the receiving end.
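
To make the point concrete: a receiving-end filter needs no permission from Twitter and no hardcoded rule – it can be as crude as a few lines run over whatever the stream delivers. A minimal sketch (the messages and keyword list are invented for illustration):

<?php
// Receiving-end filtering: take a large incoming stream and keep only
// what matches your own criteria. The data here is made up.
$stream = array(
    array('user' => 'alice', 'text' => 'New FeedWordPress build is out'),
    array('user' => 'bob',   'text' => 'Win a free iPod - follow back!'),
    array('user' => 'carol', 'text' => 'Notes on information filtering'),
);
$keywords = array('filtering', 'feedwordpress', 'aggregation');

$kept = array_filter($stream, function ($msg) use ($keywords) {
    foreach ($keywords as $kw) {
        if (stripos($msg['text'], $kw) !== false) {
            return true;  // keep anything that mentions a keyword
        }
    }
    return false;         // drop the rest - no sender had to be limited
});

print_r($kept);

However large the intake grows, a filter like this scales with it; nothing about it requires capping the number of people one follows.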

Filtering the information intake

I’ve often had Facebook friends complain about the massive stream of messages from me coming their way when I send my tweets via Ping.fm in that direction. True, there is some nerdy stuff in there which they couldn’t care less about, but I want to include them in my information circuits, not exclude them; that’s why I send it their way. If I know the precise recipient of a message, I will usually send a direct message or email to that person. But more often than not, there’s no direct recipient, only the aim to strike a chord or strike up conversation and input, like when I write a blog post. Social messaging is sometimes referred to as microblogging, and that is perhaps a very accurate description of the way I use Twitter. I send it their way because I hope some of it may create new connections, from which conversation may rise. I may discover new things about my friends doing this, because I never quite know who possesses the information I seek or shares my interests and concerns.

Increasingly, as recipients of large information flows, their job is then to learn how to filter what they take in (if they don’t choose to block or unfriend me for being “too loud”). We all need to do this. We all need to learn how to filter our incoming streams, i.e. prioritize what is more important than something else: what we need to read before something else, what emails to reply to first, etc. Increasingly, we also need to learn to code and use aggregation tools on our own, as well as free licensing, if we want to be independent of the filters offered to us by proprietary service providers.

A large information intake or stream may be overwhelming, but it has nothing to do with spam. Spam is unsolicited messages sent to a lot of people in the hope that a small percentage responds and buys something. Information streams can be managed, filtered, analyzed, transformed from one form into another.

The hardcoded follow rule imposes a limit at the wrong end. To get the best possible dataset, you don’t limit the intake; you take steps to make the intake easier to process, to make it easier to get the desired data out. Twitter has no real idea whether their users need a small or large intake of information for their data needs. But this is not the only place where Twitter doesn’t _get_ Twitter. I’ve often come back to how Twitter displays a huge failure to understand the value of their own data when they don’t allow access to the full archives of tweets. You can go back only what corresponds to three months’ worth of tweets. This means that all this data cannot be retrieved, filtered, analyzed and put to use by clever people who want to know something about social behaviour patterns, particular brands, viral effects and all other things thinkable and mentionable.

Twitter has a preconceived notion of what Twitter is, and if users don’t use Twitter that way, they are wrong and must be corrected with hardcoded rules to use Twitter as Twitter was intended. But the truth is that the versatility of Twitter has made it much larger than itself – it has outgrown its initial purposes by mile-lengths. If Twitter doesn’t get that (and the true value they can offer as a business), they risk running their service into the ground, because they don’t make it profitable.

Following back

Now, I recently provoked some debate and disagreement among some of my followers when I provocatively asked why they didn’t follow me back. Actually, the message was not really aimed at those who do follow me, but at those who don’t – the wide host of celebrities and influencers known to have a large following on Twitter while only following a small host of people themselves. I follow many of them, but hitting the 2000-follow limit forced me to reconsider a lot of them. In fact, I unfollowed at least 800 users who didn’t follow me back, in order to allow me to follow others who do follow me.

When someone follows me and I feel they are a real person interested in what I have to say, I usually want to follow them back. Not only as a token of courtesy and respect, but because I feel strange talking to someone when I have no idea what they are like. I want that influx of ideas from others, and I honestly don’t care so much whether I manage to read _everything_; it’s there, and I can take that data, do a search, create a filtered feed and other things, if and when I want to. You can too, if you want to, and if you want to learn how.

What stopped me from following others back? The 2000-limit and the many, many users I followed who didn’t care to follow back. In effect, I can keep only 200 “non-follower” users among those I follow, if I want to follow back everyone who follows me (and I usually do). So what provoked me is this: if high-profile Twitter users such as Barack Obama, Scobleizer and Guy Kawasaki follow me back, why can’t others? If they can, why can’t you?

To me, not following someone back is a message saying “I don’t care what you have to say” or “You’re less important than me”. Less worthy of attention: I’m worthy enough to be in your stream, but you can’t be in mine. That is the wrong message to send out. No matter what you want to communicate using Twitter, it’s a bad way to start a conversation with anyone. Don’t get me wrong, there’s nothing wrong with being very selective about who you follow, but if you overdo it, you risk coming off as arrogant and disrespectful, because you do not take part in the exchanges on an equal footing.

I don’t care if Obama, Guy Kawasaki or Scoble actually read what I have to say. I care about the gesture. I care about them saying with that gesture: if you give me attention, I will give you mine back. Even if it is not true. They will not occupy one of those rare, most valuable 200 slots I can allocate to pure information intake. Those may be reserved for others, typically high-profile users whose opinions and information are so important to me that I don’t care whether they listen to what I have to say. As a company, or as most people using Twitter, you don’t want to bet on being in that category. You should follow back. Why reach out (have a Twitter account) and then not want to listen to what people have to say? Indeed, to what those few who’ve already decided to give you their attention have to say (if anything)?

I don’t consider myself an atypical Twitter user. There are many bloggers, companies, organizations and other users who use Twitter because they have a message they want out. We want to reach other people and make connections with others who are interested in what we have to say and offer. But I just unfollowed a lot of startups and internet professionals who didn’t take the time, were too disinterested or too lazy to follow me back. They lost what tiny piece of my attention they had. They didn’t need to. With a small gesture, they’d still be in. Would it matter? I don’t know. Nobody knows. But they’d have made a small but important gesture, which doesn’t cost them much but may – just may – give them something of value back some day.

If attention matters to you, i.e. it matters that you reach someone out there with whom your message resonates, you can’t afford to throw away the tiny bits of attention you’re afforded when you’re afforded them.


Google as in “Massive Copyright Infringement”

Torrent index sites like The Pirate Bay are often compared to search engines such as Google in that both offer vast indexes of information, and both give easy access to unauthorized copies of copyrighted material.

One thing which surfaced during the Pirate Bay trial in late February was IFPI’s cooperation with Google and other search services in their battles against copyright infringement. When IFPI’s representative John Kennedy was asked why they sued The Pirate Bay and not Google (as in “or any other major information filtering service using the internet”), the answer was that Google cooperated, and The Pirate Bay didn’t:

When asked about the differences between TPB and Google, Kennedy said there is no comparison. “We talk to Google all the time about preventing piracy. If you go to Google and type in Coldplay you get 40 million results – press stories, legal Coldplay music, review, appraisals of concerts/records. If you go to Pirate Bay you will get less than 1000 results, all of which give you access to illegal music or videos. Unfortunately The Pirate Bay does what it says in its description and its main aim is to make available unauthorized material. It filters fake material, it authorizes, it induces.”

(…) Kennedy was asked why they haven’t sued Google the same way as TPB. He said that Google said they would partner IFPI in fighting piracy and he has a team of 10 people working with Google every day, and if Google hadn’t announced they were a partner, IFPI would have sued them too.

I think the truth of the matter is that Google’s business has been based on copyright infringement from the start. When Brin and Page started Google, they began by downloading the entire internet and offering their index of it online. In the words of Larry Page himself, in David Vise’s The Google Story:

Google was started when Sergey and I were Ph.D. students at Stanford University in computer science, and we didn’t know exactly what we wanted to do. I got this crazy idea that I was going to download the entire Web onto my computer. I told my advisor that it would only take a week. After about a year or so, I had some portion of it.

In order to offer Google’s search of their index to the world, they had to keep all the internet’s content on their own servers; otherwise their results wouldn’t be very fast. Did they ask every single website owner or administrator for permission to use said material? No. Did they need to? No – in fact, they couldn’t. The cost of asking alone would have been prohibitive for what Google was doing, if they even knew themselves what they were doing.

However, was what they did beneficial to the world? Yes, one may very well say so – to the degree that Google is now a hugely successful business whose operations span the globe and benefit millions, if not billions, of people on a daily basis. What Google did was transformative. It defined the web.

What Google added was their filtering index of the web. On their servers, the content of sites is analyzed and ranked according to PageRank, an algorithm which rewards sites with many inbound links with a better placement in search results than sites which have attracted fewer links.
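
In its textbook form, the algorithm repeatedly passes each page’s rank along its outbound links until the numbers settle. A toy sketch of that power iteration follows – four made-up pages; Google’s production system is of course vastly more involved:

<?php
// Toy PageRank over four made-up pages. This is the textbook
// power-iteration form of the algorithm, not Google's actual system.
$links = array(            // page => pages it links to
    'a' => array('b', 'c'),
    'b' => array('c'),
    'c' => array('a'),
    'd' => array('c'),
);

$damping = 0.85;
$pages   = array_keys($links);
$n       = count($pages);
$rank    = array_fill_keys($pages, 1.0 / $n);

for ($i = 0; $i < 50; $i++) {               // iterate until the ranks settle
    $next = array_fill_keys($pages, (1 - $damping) / $n);
    foreach ($links as $page => $outlinks) {
        $share = $rank[$page] / count($outlinks);
        foreach ($outlinks as $target) {
            $next[$target] += $damping * $share; // pass rank along each link
        }
    }
    $rank = $next;
}

arsort($rank);
print_r($rank); // 'c' comes out on top: it has the most inbound links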

But for this to work, they needed the data to work with. Google has done a lot to give users the impression that when one is using their core product (search), one has instant access to all of the World Wide Web. This is a brilliant illusion, but no matter how good it is, one is still only surfing around on Google’s own servers, which store terabyte after terabyte of unauthorized copies of copyrighted material. The fact remains that Google took this data without asking anyone for permission. Perhaps they didn’t need to; perhaps they didn’t deem it necessary. What Google did was one of the greatest things that could have happened to the web at the time, and it was what everyone else in the search industry was doing: throwing around data without paying any kind of homage to copyright owners. To the great benefit of every one of us today, most will say.

What The Pirate Bay and other sites are doing today – is no less transformative. But they’re not cooperating.

What happened since Google introduced their filters to the world was that the “war on piracy” greatly intensified. Napster and peer-to-peer networks threatened the monopolies of first the record industry, then the Hollywood-based entertainment industry. Google and other services which offer online metadata – i.e. access to “other people’s” information via the internet – got trapped in that battle. Some felt they had to choose sides, and most chose to cooperate with the entertainment industries – over what was right or true or just. Whether this line of business was born out of the pragmatism of doing “business” and avoiding expensive lawsuits, or out of a mission to “do no evil”, doesn’t matter. Google and like-minded companies will do a lot to cover up the fact that what they are doing is based on massive copyright infringement – including cooperating with IFPI to filter online information – every day. Which in my humble opinion is very creepy.

I say this as a big fan of Google, as a daily user of countless Google products, which I would hate to live without.

It’s a pretty good fraud. Cooperate with IFPI and other copyright holders to slightly cover up the fact that the whole thing is based on copying other people’s material. Blur the distinctions to the extent that even the courts are confused as to what they should believe. What really is the difference between Google and similar search filters and a service such as The Pirate Bay? Both store and provide access to metadata. But while the first stores everything on their own servers, from where they provide access to local copies of sites and material, The Pirate Bay and others employ a superior technology, which offers nothing but hyperlinks directly to material stored on their users’ own machines. So why should The Pirate Bay lose the case going on right now in Sweden? Because they do not cooperate. They do not care about anyone’s material. What they’re interested in is developing a new technology to the benefit of all of us. They do what Google did in 1998, except they do not commit any copyright infringement at all.

On a curious note, Google also ranks websites according to how “unique” their content is. This means that if you run an aggregation site, i.e. a site which harvests and provides access to the content of other websites – just like Google did, and still does – Google assigns you penalty points, and your site will be harder to find using Google’s search. Your site will rank lower if you do what Google does: copy the content of other websites.

What’s really scary, however, is the degree to which we rely on proprietary filtering services such as Google’s search, which are influenced by interests we don’t know about. Google presents itself as an almost universally neutral service which can give us an instant answer to almost every problem we face. The truth is that Google is in fact a highly weighted information filtering service, influenced by the special interests of organizations such as IFPI on no legal grounds except what does and does not please Google, and completely dependent on Google’s choice to cooperate. We don’t know what other special interests Google chooses to cooperate with, and we have absolutely no say in whether they do, or in how they let their search results be influenced. I can only conclude that while a few young people in Sweden are willing to stand up for our freedom of speech (for this is what I consider the “freedom to link” to be), it is shameful to realize again and again that the world’s information filtering superpower is not.

In my view, there is no other way out of this misery than to create and help build new sets of truly decentralized information filtering tools and services, based on free software, which cannot be influenced, manipulated or dominated by any particular third party. Tools which enable better, faster and more precise connections between someone who wants a message or query out and those who wish to receive and answer it. We’re still throwing rocks around in our information stone age when playing with proprietary services and tools such as Twitter, YouTube and the many, many others we use on a daily basis.


The Bumpy Rolling Out of Kaplak Stream – And What Not To Do To Piss Off Google

Kaplak is changing its course again. Since the inception of the first Kaplak idea, we’ve come a long, humbling way, only to realize over and over again how much we still have to learn. But slowly, we also realize what kind of knowhow we have and are building, and how Kaplak can help crack the problems and meet the challenges we originally set out to. Hence we also begin to understand what kind of value we add – and, just as importantly, what we don’t add. Among many other things, this is key to learning what kind of business model we want to build – and, just as importantly, what kind of business we don’t want.

Let’s take a look at what happened with our traffic since the somewhat bumpy rolling-out of Kaplak Stream in 2008, from November 1st last year to February 1st this year:

The above is a screenshot from the Google Analytics dashboard for Kaplak.com, including subdomains. Following the launch of Kaplak Stream sometime in November, our traffic started to take off. Kaplak Stream basically consists of the present WordPress MU installation, of which the Kaplak Blog is also part, along with a handful of customized plugins, of which the most important is FeedWordPress. The idea (as sketched out in this previous blog post) is that items in the stream can be “fed out” from the stream again, which will reveal new contexts which didn’t exist before. When two separate items which are both tagged “Barack Obama” are fed from the stream, they create a new “Barack Obama” context, even though the original items may have been produced and published in wildly different contexts.
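
A rough sketch of that “feeding out by tag” idea, using SimplePie (the feed parser bundled with WordPress) and two hypothetical feed URLs:

<?php
// Pull items from two unrelated feeds and group them by tag, so that
// items sharing a tag form a new context. Feed URLs are placeholders;
// adjust the include path to wherever SimplePie lives in your install.
require_once 'simplepie.inc';

$feed = new SimplePie();
$feed->set_feed_url(array(
    'http://example.org/politics/feed',
    'http://example.com/tech/feed',
));
$feed->init();

$contexts = array(); // tag => titles of the items carrying that tag
foreach ($feed->get_items() as $item) {
    $categories = $item->get_categories();
    if (!$categories) {
        continue;
    }
    foreach ($categories as $category) {
        $tag = strtolower($category->get_label());
        $contexts[$tag][] = $item->get_title();
    }
}

// Everything tagged 'barack obama', regardless of where it was published:
if (isset($contexts['barack obama'])) {
    print_r($contexts['barack obama']);
}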

The first installment of Kaplak Stream came with about fifteen feeds, of which a handful were submitted by owners of niche websites. Others were feeds from sites such as YouTube, Amazon.com, Twitter (tracking particular subjects or keywords) and Boing Boing – enough to provide the stream with some variety and “head”, which would also test the autotagging performed by Open Calais via a modified version of Dan Grossman’s WordPress plugin.

Kaplak Stream managed to aggregate well over 15,000 items, i.e. about 1,000 items from each feed on average. Far more tweets than regular blog posts were aggregated, but posts attracted the greater amount of traffic, given that they worked much better with the autotagging functionality in place. Since they had more text, the tagging tended to be more precise – although sometimes tags were wildly misleading and out of place. Room for lots of improvement. Most traffic, about 90-95%, came from search, notably Google. Visitors tended not to stay long, but to quickly be on their way again. This could seem to suggest that only few found what they were looking for. However, reports also came in from feed owners that our traffic managed to produce a meaningful sample of visits on the actual sites aggregated. This was really good news, as it suggests that a sample of our visitors actually found what they were looking for, or were curious enough to click through.

So what pulled the rise in traffic? No subject in particular, but the variety of subjects covered. What attracted users were more often than not pretty obscure pages and topics. For example, the top result was the “tag page” for the tag “university-of-illinois-arctic-climate-research-center”, with 641 views, and there was no recognizable pattern in the rest of the more popular pages reached by visitors. I have not given our sample substantial analysis here, but my guess is that if one plotted the number of visits to each page in Kaplak Stream and ranked them beside each other, a neat power-law graph would emerge. But there is no discernible pattern as to why some aggregated items were more popular than others.

While some things seem to work, albeit still just barely, there are also problems. One of these is that something apparently happened on January 26th which made our traffic drop drastically, back to pre-Kaplak-Stream levels. Presumably this drop was caused by a Google penalty for duplicate content, which Google has been known to give websites that carry identical content across different domains. While Kaplak’s goals are somewhat aligned with Google’s, although not completely, I’m not sure the penalty (if there was one) wasn’t “right”: there were clearly limits to how informative and appropriate the search results which led visitors to our site were – hardly enough to justify the dramatically beneficial position we gained by aggregating just 15 feeds.

Another problem is the “noise” level, in our tagging and in the combinations of feed items tagged with similar tags. Tags can be, and mostly are, very local. A post only remotely connected with a person and a piece which is solely about that person are usually tagged identically. My instinct tells me we need to use automated tools for what they are good for, and let filtering be more in the hands of expert users, in the contexts where it matters.

Clearly, more experiments are needed, and we need much more sustained analysis and methods to analyze our data. All this takes time and costs money. Right now Kaplak has no business model except what we can put into it out of our own pockets (meaning mine) – and those are rapidly emptying. This means that for the time being, i.e. for several months now and several months (perhaps even years) ahead, I will not be able to work on and develop Kaplak full time. Thanks to the benevolence of our host, we can keep and continue to work on all Kaplak’s sites and projects, but we’ll make some changes which best prepare us to run Kaplak as a part-time operation.

We’ll convert the Kaplak setup to one more similar to the UMW Edublogs set up by Jim Groom at the University of Mary Washington. Among other things, this means we’ll focus more on building each smaller site in the network, and keep each site focused on its subject or theme. We’ll focus more on aggregating what happens within the Kaplak network of sites than on what is going on outside the Kaplak WPMU install. We’ll still use aggregation tools to track very particular subjects, keywords and tags, but each subject will be treated on a site of its own, to make things more manageable (it’s a mess cleaning up a large site based on aggregated items). In other words, we’ll run a network of small, very low-maintenance sites, and delay bigger experiments and improvements for a while. Meanwhile, Kaplak Stream will still be able to track tags across all sites and offer feeds from particular tags used in the network.

Reducing the amount of my time which goes into actual development of Kaplak also means I can focus better on building a new constellation of resourceful people and (real) investors, which we will need to come back stronger with a revived Kaplak at a later time. This is what I hope to achieve, while I simultaneously work on other things, making a living.

However, there is also a risk that we don’t. That our ways may go in other directions. This is not necessarily all bad. See the video with Tim O’Reilly in a previous post to see why. I will try very hard to keep an open mind and attitude and not get stuck in ideas I had better leave behind. That said, I can’t see any companies or services which presently crack the problems we set out to – and this means we still need to fill that space, one way or the other. And more than anything, I can’t stay away.


When Words Are Not Enough



The web is awash with shocking images – terrible, shocking images of dead children. Of what is happening in Gaza, right now. I don’t care about the political mumbo-jumbo; it doesn’t interest me. But I do care about what people are doing to each other. What crimes can be committed when people sign off their responsibilities towards their fellow men and replace them with loyalty and servitude to false concepts, institutions and leaders, which cowardly hide behind the rhetoric of concepts and words.

A friend sent me these pictures on Facebook – similar pictures can be found all over the web. Here, here or here. Here. Or simply here.

There were a few which spoke to me deeply. A father (I assume) carrying his dead child away. A kid with his head just above the ground. Corpses of burned children. I am a father myself. It doesn’t take much empathy to understand what kind of unspeakable atrocity is committed here. I have little to say in words, except that it makes me sad – and furious at the same time. I blipped about it here, and that’s just one insufficient way to express how I feel. Words are insufficient.

I try to avoid watching the news. I don’t really like to be spun into the web of politics and juggling of concepts which is what’s going on in television-made reality. I like the internet, where I can obtain the information I need when and where I like. A friend can always share news with me, in many different ways, if he or she deems it important for me to know. Or I can stumble upon things, I wouldn’t otherwise know about. I can be reached.

Today, these images made me think about how images like these can now reach us in a way they couldn’t a mere 10-15 years ago. They’d never make it past the editorial room of the television news, never make it into prime-time TV (for good reasons). But they tell the unmasked truth of what’s going on: killed children, dead babies, smashed families… what goes on in every war, no matter how pretty or political it looks on TV. And it is what needs to reach us and anyone else with influence and just the slightest sense of responsibility. This can’t go on in the 21st century.

I can’t help the feeling that all my work and interests are shallow, when faced with these atrocities. This goes for my work in Kaplak, as well as my hobbies, such as playing strategy games and developing computer game scenarios.

That is, until I remind myself that the reason I do what I do is to facilitate this kind of exchange of information. I am reminded of Clay Shirky’s ideas of what creates a group and what makes group action possible: shared information, and a platform for interaction. That we develop technological architectures which enable decentralized access to and distribution of information, which operate fast, can easily be used and adapted, and which enable mutual connections between otherwise disconnected entities. Now we have wikis, the blogosphere and Twitter. But we need even better tools to facilitate these exchanges of information, and to coordinate advanced and complex operations between peers. This is what we do. This is what we’re taking our first few digs into.

I recently blipped about the German patriotic song Die Wacht am Rhein, a song with roots in the Prussian expansion wars of Bismarck, 1864-1871, and one which was also immensely popular in Germany during the two world wars. I want to create a Civilization II scenario on Bismarck’s wars and this forging of the German national state – as a way to explore this, in many ways our most recent history: the birth of the modern European national state, and the iron, technology and blood spilt in the process. The kind of history taking place right now in Gaza is not new. These kinds of atrocities are not new. But gone are the days of romanticizing war and dressing it up as patriotism. Gone are the days when images such as these could be kept away from the public eye. And come are the days when atrocities in one distant corner of the globe can reach the rest of the globe with the speed of fiery lightning. Hopefully, that will make an act such as this much harder to commit without the world acting against it. If we don’t act, it could be our children there, dead in the ruins. In a way, it is.


Yet Another Sweet Little Autoblogger

Aggregation tools such as WP-o-Matic and FeedWordPress just got a promising little brother, and I’m currently playing a little with it in the Kaplak Labs. The name of this nice little WordPress plugin is Yet Another Autoblogger, or YAAB for short. It is developed by Satheesh Kumar, who was kind enough to post a note about it on the blog just recently:

I too have made a similar but better plugin called YAAB-Autoblogger. Yaab has all features of wp-o-matic and in addition it can create automatic blog carnivals in your site. Also it supports SMS blogging and Youtube cloning. Ebay product syndication and automated content rewriting are upcoming features. After all I myself is a doctor ( not a programmer ). I started making this plugin for my personal use, but when I doveloped it, it was highly impressing and I have planned to release it for public. Kindly download it from http://www.psypo.com/yaab , try it and if possible please review it in your valuable blog

I have only just played around with this plugin a little, but it looks fairly promising. Here are my initial comments and feedback for further improvement (which I also posted on Satheesh’s blog):

  • I can’t get YAAB to fetch multiple feed items as separate posts, like FWP or WP-o-Matic do. It fetches only the latest post, or saves the complete feed into a single post, no matter what values I provide it with. I’m sure this is easily fixed or explained.
  • YAAB is very user-friendly and has an almost cartoony, tutorial-like quality. I like the little character who helps guide you through setting up a feed for aggregation. Neat stuff, but it makes me wonder how flexible the plugin will be for more “unusual” types of feeds.
  • I also like the template very much. It’s very similar to what Guillermo did in WP-o-Matic, and I liked it there too :-)
  • However, there are no variables for author, date posted, permalinks back to the source, or other data included in the feeds. It would be nice to be able to extract all the information in the feed and place it where I want in the post. It would also be nice to have regex-like functionality to replace terms or code in a feed item, like the one used in WP-o-Matic. But the author and source/permalink information especially is crucial, IMHO.
  • There is no functionality for tagging incoming posts, or for fetching the tags included in the feed. Also a bit crucial in my book.
  • YAAB has some very promising YouTube feeds functionality which makes it easy to set up an autoblog with automatically embedded YouTube videos. I haven’t played with it yet – but I will :-)

As previously stated, I have absolutely no idea yet how flexible this plugin is when it comes to feeds from Twitter Search and other such weird Atom sources. But as this is the first version, I’ll worry about that later :-) Keep up the good work, Satheesh!


FeedWordPress Extensive Update

FeedWordPress has received an extensive update. The latest version, of November 5th, 2008 (including subsequent interface bugfixes), is available here.

Great to see this round of improvements! Some of the most important new features are support for tags and formatting filters. The plugin has also left beta status and supports all the latest versions of WordPress (2.5 and 2.6).

Find our earlier review of FeedWordPress and WP-o-Matic here.


Aggregation Tools For WordPress: The Pros And Cons of FeedWordPress and WP-o-Matic

We’re in the process of setting up our Planet-like website Kaplak Stream. I’ve done some extensive reading and testing of the two most prominent aggregation plugins for WordPress and WordPress MU: Guillermo Rauch’s WP-o-Matic plugin and FeedWordPress by Charles Johnson (aka RadGeek) of Feminist Blogs. This article will examine the pros and cons of both these plugins in their present state.

Both aggregation tools are open source and distributed under a GPL license, which means anyone may adjust the workings of these plugins and republish their version. Each is, however, developed and pioneered by a single developer, and relies heavily on the commitment of that developer.

WP-o-Matic

WP-o-Matic is developed by the 16-year-old Argentinian wunderkind Guillermo Rauch, who has done a remarkable job. Schedules are very easy to organize. They are called campaigns, and each campaign can fetch as many feeds as you like. Campaigns are executed by cron, which runs on the server and executes the fetching script at specified intervals. If you can’t get cron from your web host, the WP-o-Matic script can be executed by Webcron. Webcron was a free online service until recently; now the service must be paid for (at a very low price, one may add).
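
For reference, a server-side cron setup for a plugin like this is typically a single crontab line requesting the plugin’s fetching script on a schedule. The URL and campaign code below are placeholders – in a real install they come from WP-o-Matic’s own settings screen:

# Hypothetical crontab entry: run all due campaigns once an hour.
0 * * * * wget -q -O /dev/null "http://example.com/wp-content/plugins/wp-o-matic/cron.php?code=YOURCODE"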

Pros

  • Wonderfully flexible customization options for each campaign, directly accessible from a brilliantly designed WP admin interface: specified expressions or URLs can be transformed, and additional custom text or code added to each post in the campaign (such as ads). Great stuff.
  • Uses cronjobs for executing the script, which should provide the greatest reliability, if you can get it.

Cons

  • Doesn’t use the timestamp of fed posts if they are older than the time window set for the campaign. I.e. if a post is months old and you’ve set your campaign to fetch every hour, posts will be timestamped with the time of fetching rather than with the original timestamp. This sometimes means older posts are published in the wrong or opposite order of the feed, which messes up the chronology of a blog. This, combined with the bugs which make it difficult to re-run fetches without completely removing the campaign, makes correcting the timestamps a very tedious affair. If timestamps are important to you, this is a no-no (see the sketch after this list for the behaviour one would want).
  • Uses Unix/Linux cronjobs for fetching feeds, which is good if you can get it – and know how to set it up – but not everyone can or does.
  • Seems unreliable when used without Unix cron. Campaigns are not processed at all, or are processed at the wrong time intervals.
  • Bug-ridden – small bugs such as campaigns not resetting properly when reset. Complete campaigns and posts have to be deleted if one wants to re-fetch a feed to test a new configuration.
  • Uncertainty whether the plugin is supported and developed further by its developer. The last release is from October 2007. Guillermo (who has now turned 17) recently announced his continued support for WP-o-Matic and the release of a new version in the near future, along with a new website specifically for this plugin.
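
For contrast, the timestamp behaviour one would want (as noted in the first con above) is simple to express: when storing a fetched item, keep the feed’s own publication date and fall back to “now” only when the feed omits it. A sketch, assuming SimplePie feed items and WordPress’ wp_insert_post():

<?php
// Illustrative only: store a fetched feed item with its original date.
function store_item($item) {
    $timestamp = $item->get_date('U');   // the item's own publish time
    if (!$timestamp) {
        $timestamp = time();             // fall back only if it's missing
    }
    return wp_insert_post(array(
        'post_title'   => $item->get_title(),
        'post_content' => $item->get_content(),
        'post_date'    => date('Y-m-d H:i:s', $timestamp),
        'post_status'  => 'publish',
    ));
}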

FeedWordPress

I initially had problems with feeds from Google Reader (and Twitter, for that matter) – titles showed, but content disappeared. At first I thought this was a general problem with Atom feeds, but it turned out to be because WordPress (even the latest versions) comes bundled with an outdated Magpie RSS parser. At first glance, the problem wasn’t fixed by exchanging rss.php and rss-functions.php with the updated ones bundled with FeedWordPress, but reinstalling these files and re-entering the feeds did in fact solve the compatibility problems with Atom feeds. Coming from WP-o-Matic’s advanced campaign setup, I wasn’t initially impressed with the interface provided by FeedWordPress, and the hassle I had with Atom feeds gave me the impression that this plugin was no match for WP-o-Matic. But as I worked with it, FeedWordPress turned out to be an extremely competent agent for the job.

Pros

  • Extensively well documented
  • Seems to be the more stable and reliable candidate of the two. Works great with WordPress’ built-in cron alone.
  • Built-in API for WP themes and plugins to use
  • Maintained, supported and seems to be actively developed by the developer (last build 8 May 2008)
  • Works great with timestamps – fetches all timestamps from feeds 100% correctly.

Cons

  • Can’t add custom text or code to the posts of each particular feed, except by utilizing the API. If one utilizes the API from a WP theme, custom changes will apply to all syndicated posts when they are displayed on the site. This is a cosmetic solution only, in that the custom layout and text are applied only in the visuals and not reflected in the actual contents of a post. To actually ‘inscribe’ posts with custom text or code, which stays with the post no matter how it is skinned or republished by other sites, one has to access the API from within a plugin which hooks itself up to an action or filter in WordPress. This requires a bit of PHP coding/hacking skills (see the sketch after this list).
  • Can’t import tags. Tags can, however, be imported by FeedWordPress as new categories, which somewhat alleviates the problem, but forces you to go with the category system over tagging, or both.
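
As an illustration of the API point above, here is a minimal sketch of the ‘cosmetic’ route: a tiny plugin hooking WordPress’ the_content filter and using FeedWordPress’ is_syndicated() and get_syndication_permalink() template functions to append source attribution at display time. To truly inscribe the text into the stored post, the same logic would have to run from a save-time hook instead:

<?php
/*
Plugin Name: Syndication Footer (sketch)
*/
// Appends a source link to syndicated posts when they are displayed.
// Cosmetic only: the stored post content is left untouched.
function kaplak_syndication_footer($content) {
    if (function_exists('is_syndicated') && is_syndicated()) {
        $source   = get_syndication_permalink();
        $content .= '<p class="syndication-source">Originally published '
                  . '<a href="' . $source . '">here</a>.</p>';
    }
    return $content;
}
add_filter('the_content', 'kaplak_syndication_footer');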

Conclusions

Both plugins reviewed here possess tremendous power, at your fingertips. Neither is perfect, however, and both still need work, but I’m impressed with both. What they can do, and the power and speed with which they work, is impressive. I’d love to have FeedWordPress feature the powerful customization scheme of WP-o-Matic, and I’d really like to have WP-o-Matic use the WordPress cron as reliably and steadily as FeedWordPress does. And I’d really, really like to have WP-o-Matic just get timestamps right, with the ease of FeedWordPress.

However much I adore the flexible and powerful customization interface (the ‘campaign’ setup) of WP-o-Matic, we have to go with the more stable candidate of the two, which is FeedWordPress, IMHO. Especially since we can’t get cron right now, and are reluctant to pay for it, if we can get something which works great at this level without paying for it.

We’re going with FeedWordPress, mainly for these reasons:

  • It works well, even without setting up cronjobs (using WordPress’ built-in cron).
  • It deals well with timestamps. There’s no messing around with the chronology of posts.
  • It is the better documented plugin of the two, and it has an API which makes it easy for us to tweak it for our uses.
  • And we have greater trust in its developer, Rad Geek/Charles Johnson, to continue support and development for this plugin.

When using free software plugins, I find that picking the ones you want comes down to which killer feature you really want, and which developer you trust the most to deliver it and to continue development and support.


Get WordPress MU To Stop Worrying And Love Embedded Stuff

Kaplak Stream is based on a WordPress MU install (currently v2.6.1), where a network of niche sites is fed one or more feeds on a particular subject in the ‘stream’, or from particular online services, using feed aggregation tools.

Building the setup for Kaplak Stream so far has revealed a path ridden with challenges (as one might expect). WordPress MU, while a tremendously powerful package, is not as widely used as its popular little sister, and is therefore less well documented and supported – which goes for the compatibility and effects of various plugins, too.

One initial thing which gave rise to some trouble was getting WordPress MU to stop worrying and love embedded stuff such as YouTube videos and widgets. WordPress MU was designed for large environments hosting thousands of blogs, with thousands of different users, and has a higher security threshold than regular WP. And there’s no way to turn this filtering of tags off in the admin interface.

Now, there’s a plugin called Unfiltered MU which will remove this filtering of posts and thus allow embedded stuff. Unfortunately, this plugin works only with posts actually published using the admin interface editor. It doesn’t work with imported posts (from your old single-WordPress setup), and apparently it doesn’t work with aggregated posts either. So if you set up MU and want it to import an old blog, or set it up to aggregate items from a feed, you’ve still got trouble.

I found out one has to manually edit kses.php to enable the tags used by embedded media, at one’s own peril. For our purposes, however, security is less of a concern, in the sense that we are the only users of our system for the time being.

At your own peril (I underscore that you may put your setup at risk by enabling these HTML tags, but hey, life is dangerous): put these tags – object, embed, param, script – and something along the lines of the code below into your “allowed” arrays in kses.php.

// Entries to merge into the allowed-tags array in kses.php (typically the
// $allowedposttags array in wp-includes/kses.php). Each attribute listed
// here survives the filter; anything not listed is stripped out.
'object' => array (
			'id' => array (),
			'classid' => array (),
			'data' => array (),
			'type' => array (),
			'width' => array (),
			'height' => array (),
			'allowfullscreen' => array ()),
'param' => array (
			'name' => array (),
			'value' => array ()),
'embed' => array (
			'id' => array (),
			'style' => array (),
			'src' => array (),
			'type' => array (),
			'height' => array (),
			'width' => array (),
			'quality' => array (),
			'name' => array (),
			'flashvars' => array (),
			'allowscriptaccess' => array (),
			'allowfullscreen' => array ()),
'script' => array (
			'type' => array ()),

Pick the ones you need for your videos or other embedded media to work. Allowing the ones listed will enable video embeds from most providers, including YouTube, Google Video, Viddler, Blip.tv and others, as well as widgets from a lot of sources. It works on posts aggregated by FeedWordPress, for instance, which was my problem with the Unfiltered MU plugin.


The Anthropology of YouTube

I can’t begin to say how much I enjoyed this video of a talk by cultural anthropologist and media ecologist professor Michael Wesch of Kansas State University, famous for his extraordinary video on web 2.0, which gained enormous popularity in the YouTube community.

Now, in this video Wesch shares his thoughts on YouTube as a historical, social and cultural phenomenon. It is as entertaining as it is insightful on the complete palette of workings of the new order of the web, of which YouTube is a great example. Please enjoy:

Thanks, once again to Raymond for the tip on this video.
