Tuesday 25 December 2012

Indian IT in 2013 - Some basic things to know

Indian IT has a great opportunity in 2013, but the industry must grab this opportunity and turn it into good business. Here are a few points the Indian IT industry needs to keep in mind for the 2013 season-

IT export business-:   Nasscom expects India's IT exports business to meet the lower end of 11%-14% growth in FY12. However, analysts have a more conservative outlook due to the slowdown in Europe and continuing uncertainty in the US.


Cloud -:   Cloud remains a buzzword, with insiders expecting significant pick up in actual contracts in 2013. As cloud helps in cutting capital expenditure, cost-conscious large corporations are testing the waters.


Data Analytics-:   From large retail chains to investment banks, corporations across the world are leveraging data analytics technology for better decision making and to predict customer behaviour.


Mobility-:    India's IT providers create mobile applications for some of the world's largest organisations. As the number of mobile phones and tablets increases, this segment is expected to be an important revenue generator.


Rising visa costs & denial rates-:  
India's IT services sector is facing increased scrutiny from US immigration officials as the political backlash against outsourcing continues. Industry body Nasscom estimates that in 2012, the United States issued 102,000 work visas, of which 60% went to the Indian IT industry. 

Analysts expect the Indian IT industry to witness higher visa denial rates and visa costs in 2013. 

Higher technology adoption and outsourcing by Indian corporations and government is expected in 2013, opening up new opportunities for IT services companies with meaningful India presence.

Consolidation in IT sector-:    If 2012 was the year of acquisitions for large-caps Infosys and Wipro, in 2013 mid-tier IT providers are expected to be active on the consolidation front. Companies like Hexaware, L&T Infotech, iGate and Persistent Systems are said to be scouting for possible acquisitions. 

In 2012, Infosys bought Zurich-based consultancy firm Lodestone, while cross-town rival Wipro bought Australia-based analytics provider Promax; Mumbai-based Tech Mahindra made two acquisitions.




Source- ET 

Sunday 23 December 2012

SMBs Can Be More Secure With Cloud, Virtualization, BYOD




In a smaller shop, this is usually out of necessity. As we all know, SMBs usually don't have the luxury of an expansive technology budget that would allow for extensive resource specialization, meaning only rarely is there budget for dedicated information security staff. Because of this, security tasks usually fall to mainline IT.

Because the stakes of information security can be higher in an SMB -- consider a breach's cost impact as a percentage of an SMB's revenue, or the downtime impact of a single location being unavailable during a natural disaster -- security can become a veritable recipe for keeping SMB technology pros up at night and generally stressed out.


Run With the Big Dogs

Fortunately though, some recent changes in the way enterprises consume IT services can help offset areas that have been traditional information security pain points for SMBs. Specifically, changes like increased adoption of cloud, expanded use of virtualization, and even BYOD can have some beneficial effects in a budget-strapped, security-conscious SMB.

We often hear about the security downsides of these changes -- and it's true the potential exists -- but SMBs have a few advantages in adopting these technologies that their larger cousins don't. Since these trends aren't likely to go away anytime soon -- far from it -- planning for them now gives SMBs a chance to make use of their natural strengths to adopt them successfully and securely.

How are these technologies beneficial for SMBs?

It sometimes strikes folks as surprising that the technologies outlined above can actually have a security benefit. After all, aren't we always hearing how dangerous the cloud is and about the perils of a lost or stolen laptop? However, keep in mind that many of the same dynamics that make these technologies challenging to adopt in a large enterprise, at least from a security standpoint, can have the reverse effect in a smaller organization.

The Cloud

One of the biggest challenges with cloud arises when multiple cloud technologies are employed simultaneously. Consider, for example, a situation where a single organization employs multiple service providers, an array of internal private cloud deployments (all at varying levels of sophistication and implementation success), and a wide array of different usage scenarios throughout a number of disparate business areas.

You can imagine how difficult that would be to secure. Not only is there the legwork in discovering and vetting all of the different technologies -- some in house, some not -- and the complexities of monitoring all of the security controls and SLAs, but you also need to stay apprised as new technologies are brought online.

In an SMB, these challenges have the potential to be reduced. For cloud, it's likely that the number of deployments, both internal and with various service providers, will be fewer. This in itself has a potential benefit, as it reduces the overhead associated with vetting and monitoring numerous deployments. However, it's also the case that the number of nooks and crannies where individuals might be branching out and making use of new services in unexpected ways (for example, developers making use of AWS without approval from IT) is smaller as well. It's not that this won't happen -- it still will -- it's just that the likelihood of discovering it quickly is higher because there are fewer instances and you're more likely to hear about it.
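
A small amount of automation makes that discovery even easier. The sketch below is a minimal illustration (ours, not the article's) that uses the AWS boto3 library to list EC2 instances and flag any that lack an "ApprovedBy" tag; the tag name, the region and the single-account assumption are all placeholders you would adapt to your own environment.

# Minimal sketch: flag EC2 instances that nobody in IT has approved.
# Assumes the boto3 library is installed and AWS credentials are configured;
# the "ApprovedBy" tag convention is purely illustrative.
import boto3

def find_unapproved_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    unapproved = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "ApprovedBy" not in tags:
                unapproved.append(instance["InstanceId"])
    return unapproved

if __name__ == "__main__":
    for instance_id in find_unapproved_instances():
        print("Unapproved instance:", instance_id)

Run on a schedule, a report like this is usually enough for a small IT team to spot unsanctioned cloud usage within days rather than months.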

Managing Virtualization

Virtualization technologies also get more complex with size -- managing 1,000 virtual images is a whole different kind of problem than managing 100,000. Not to mention that dissonance occurs quite frequently in control implementations -- and supporting policy -- when the legacy environment and the virtualized environment are supported at the same time. In other words, the technical and operational aspects of security are most difficult to manage during the transitional period where both the legacy environment and the virtual environment are in use.

For the SMB, the transitional phase of a virtualization initiative has the potential to be shorter when compared with a large organization. Why? Because there are fewer physical machines potentially in scope for migration. Also, the surface area of the most problematic situation -- legacy business applications -- is likely to be smaller.

The impact of this is that the SMB can move more quickly to the post-virtualization phase, which in turn means it can focus on refining the operational controls and processes suited to a virtual environment rather than on maintaining parallel sets of controls and processes across both traditional IT and the virtual space.

Dealing With BYOD

Lastly, BYOD. When it comes to the complexities of maintaining a managed corporate endpoint, we all probably feel that pain acutely. No matter what the size of the environment, it always seems like the security hygiene tasks required to support the endpoint keep us running in place: patching the OS, patching third-party applications, keeping anti-malware software current, backing up data, dealing with user login issues, etc. These are hard to keep running smoothly -- especially on a limited budget. Keeping corporate-provisioned mobile devices locked down and current is similarly complicated.

However, a strategic BYOD initiative -- i.e. one focused on reducing the number of managed mobile devices -- can help offset some of these issues. For example, if you allow employees to make use of a laptop that they themselves provision and potentially leverage a virtual machine image or remote access environment for access to corporate services like email, these tasks can become easier.

Why? Because you can more easily and quickly update the virtual endpoints compared with a physical device. For example, rather than remotely updating machines via an agent, you might choose to just swap out the VM they're using with one that has different or updated security controls. Employing BYOD mobile technologies, assuming you've thought it through and have deployed a comprehensive set of controls, can have a similar effect on corporate-issued mobile devices. It can help you focus on securing the services, regardless of how they're accessed, rather than focusing just on securing the device. Also, as with virtualization, SMBs can get there faster because of their smaller technology footprint.
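
To make the "swap out the VM" idea concrete, here is a minimal sketch (our illustration, assuming a libvirt-based setup; the golden-image path and domain names are hypothetical) that replaces each user's virtual desktop disk with a freshly patched golden image and restarts it. A production script would wait for a confirmed clean shutdown and would keep user data on separate, persistent storage.

# Sketch: refresh user VMs from a patched "golden" image instead of patching each one in place.
# Assumes libvirt's virsh CLI is available; all paths and VM names below are illustrative.
import shutil
import subprocess
import time

GOLDEN_IMAGE = "/var/lib/libvirt/images/corp-desktop-golden.qcow2"
USER_VMS = {
    "alice-desktop": "/var/lib/libvirt/images/alice-desktop.qcow2",
    "bob-desktop": "/var/lib/libvirt/images/bob-desktop.qcow2",
}

def refresh_vm(domain, disk_path):
    # Ask the guest to shut down; a real script would poll until the domain is off.
    subprocess.run(["virsh", "shutdown", domain], check=True)
    time.sleep(60)
    # Replace the VM's disk with the current golden image, then bring it back up.
    shutil.copy2(GOLDEN_IMAGE, disk_path)
    subprocess.run(["virsh", "start", domain], check=True)

if __name__ == "__main__":
    for domain, disk in USER_VMS.items():
        refresh_vm(domain, disk)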

Friday 21 December 2012

Cloud computing: Only 5% of techies are ready for jobs


While cloud computing is widely recognised as the next big opportunity to watch out for, it has already made significant inroads in the industry. However, the IT workforce may not be keeping pace with these developments. The industry-ready workforce for new domains like cloud and mobility may be as low as 5.7% of the current IT workforce.

The global IT workforce is estimated to be around 20 million, of which around 3 million are in India.

The Indian cloud market may grow by more than 70% in 2012 as per the Indian Cloud Market Overview 2011-2016 report by International Data Corporation (IDC). It stood at $535 million in 2011. The report also said that the market is likely to grow 50% over the next three years.

While it is not that IT jobs are disappearing, there is a clear shift in the nature of the jobs. While traditional IT jobs are likely to get automated, jobs in newer domains like cloud computing will be created.

An HCL report further says that the churn of experienced employees from IT to other sectors has gone up by 15%-20% over the past year.

It is becoming harder for firms to find people for positions that require particular skill sets related to cloud. AMP Technologies, founded by Sabeer Bhatia, on Thursday announced that the complete development of their cloud-based product was done in Chennai. Rujuta Rammohan, who heads HR at AMP, said it was very difficult to find the 15 people for development, especially the few with specific cloud computing skills. "It took us around six months to recruit all of them," she said. "Most of the companies just outsource to specialised cloud computing firms or they start training some of their own employees."

A Gartner report noted that 'externalization' will be a big factor in the future of IT services - the first choice of buyers is to consider "sourcing" instead of building IT solutions.




Source- Times of India.

Wednesday 19 December 2012

India is on the verge of an internet boom: Study


The internet services and devices industry has the potential to increase its contribution to India's gross domestic product to as much as 3.3 per cent by 2015, a report by McKinsey & Company said.

"The internet currently contributes a modest 1.6 per cent to India's GDP in line with most aspiring countries. This could grow to 2.8-3.3 per cent by 2015, if India achieves its potential for growth in the number of internet users and Internet technology related consumption and investment over this period," it said.

This would increase internet's contribution to GDP from $30 billion now to nearly $100 billion in 2015.

At present, India's information and communication technology (ICT) exports are the most significant component of the internet's impact on GDP, but private consumption and investment from the private and public sectors have greater potential to grow in the future, the report said.

India with a user base of 120 million is the world's third largest internet market and is poised to have up to 370 million users in 2015.

China has the largest internet user base in the world with 480 million users followed by the US with 245 million users as of 2011. By 2015 China is expected to have 583 million users, while the US is expected to have 279 million users.

The global internet user base is expected to grow from 2,300 million users in 2011 to 2,662 million by 2015, McKinsey & Company said in the report.

Monday 17 December 2012

With Big Data, Don't Forget Compliance and Controls



9 tips for securing Big Data
  1. Think about security before you start your big data project. You don't lock your doors after you've already been robbed, and you shouldn't wait for a data breach incident before you secure your data. Your IT security team and others involved in your big data project should have a serious data security discussion before installing and feeding data into your Hadoop cluster.
  2. Consider what data may get stored. If you're planning to use Hadoop to store and run analytics against data subject to regulation, you will likely need to comply with specific security requirements. Even if the data you're storing doesn't fall under regulatory jurisdiction, assess your risks - including loss of good will and potential loss of revenue - if data like personally identifiable information (PII) is lost.
  3. Centralise accountability. Right now, your data probably resides in diverse organizational silos and data sets. Centralising the accountability for data security ensures consistent policy enforcement and access control across these silos.
  4. Encrypt data both at rest and in motion. Add transparent data encryption at the file layer. SSL encryption can protect big data as it moves between nodes and applications. "File encryption addresses two attacker methods for circumventing normal application security controls," says Adrian Lane, analyst and CTO of security research and advisory firm Securosis. "Encryption protects in case malicious users or administrators gain access to data nodes and directly inspect files, and it also renders stolen files or disk images unreadable. It is transparent to both Hadoop and calling applications and scales out as the cluster grows. This is a cost-effective way to address several data security threats."
  5. Separate your keys and your encrypted data. Storing your encryption keys on the same server as your encrypted data is similar to locking your front door and then leaving the keys dangling from the lock. A key management system allows you to store your encryption keys safely and separately from the data you're trying to protect. (A brief encryption sketch illustrating this tip and the previous one follows this list.)
  6. Use the Kerberos network authentication protocol. You need to be able to govern which people and processes can access data stored within Hadoop. "This is an effective method for keeping rogue nodes and applications off your cluster," Lane says. "And it can help protect web console access, making administrative functions harder to compromise. We know Kerberos is a pain to set up, and (re-)validation of new nodes and applications take work. But without bi-directional trust establishment, it is too easy to fool Hadoop into letting malicious applications into the cluster, or into accepting the introduction of malicious nodes - which can then add, alter or extract data. Kerberos is one of the most effective security controls at your disposal, and it's built into the Hadoop infrastructure, so use it."
  7. Use secure automation. You're dealing with a multi-node environment, so deployment consistency can be difficult to ensure. Automation tools like Chef and Puppet can help you stay on top of patching, application configuration, updating the Hadoop stack, collecting trusted machine images, certificates and platform discrepancies. "Building the scripts takes some time up front but pays for itself in reduced management time later, and additionally ensures that each node comes up with baseline security in place."
  8. Add logging to your cluster. "Big data is a natural fit for collecting and managing log data," Lane says. "Many web companies started with big data specifically to manage log files. Why not add logging onto your existing cluster? It gives you a place to look when something fails, or if someone thinks perhaps you've been hacked. Without an event trace you are blind. Logging MR requests and other cluster activity is easy to do and increases storage and processing demands by a small fraction, but the data is indispensable when you need it."
  9. Implement secure communication between nodes and between nodes and applications. To do this, you'll need an SSL/TLS implementation that protects all network communications rather than just a subset. Some Hadoop providers, like Cloudera, already do this, as do many cloud providers. If your setup doesn't have this capability, you'll need to integrate the services into your application stack. (A brief TLS client sketch also follows this list.)
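
To make tips 4 and 5 concrete, here is a minimal sketch (our illustration, not part of the original list) of file-level encryption applied before data is loaded into the cluster, using Python's cryptography library. The file names are placeholders, and writing the key to a separate mount stands in for a proper key management system.

# Sketch for tips 4 and 5: encrypt a file before it lands in the cluster,
# and keep the key somewhere other than the data nodes.
# Assumes the "cryptography" package is installed; all paths are illustrative.
from cryptography.fernet import Fernet

DATA_IN = "customer_records.csv"                  # plaintext source file
DATA_OUT = "customer_records.csv.enc"             # what actually gets loaded into Hadoop
KEY_FILE = "/mnt/keystore/customer_records.key"   # separate key store, not on the data nodes

def encrypt_file():
    key = Fernet.generate_key()
    with open(KEY_FILE, "wb") as f:
        f.write(key)          # in production this would go to a key management system
    with open(DATA_IN, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(DATA_OUT, "wb") as f:
        f.write(ciphertext)

if __name__ == "__main__":
    encrypt_file()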
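
And for tip 9, this minimal sketch (again ours, not the article's) shows a client opening a TLS-protected connection to a cluster service using Python's standard ssl module. The host name, port and CA bundle path are assumptions; in practice the Hadoop distribution's own wire-encryption settings would normally handle node-to-node traffic.

# Sketch for tip 9: wrap a plain socket in TLS before talking to a cluster service.
# Host, port and CA bundle below are illustrative placeholders.
import socket
import ssl

HOST = "datanode01.example.internal"
PORT = 50010
CA_BUNDLE = "/etc/security/cluster-ca.pem"

context = ssl.create_default_context(cafile=CA_BUNDLE)  # verify the server against our own CA

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        tls_sock.sendall(b"ping")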

Sunday 16 December 2012

Only 0.5% of world's data being analysed: Study

The number of internet-connected devices has been growing so fast that the digital universe has doubled in the last two years to an unprecedented 2.8 ZB. ZB stands for zettabyte, and one zettabyte is equal to 10 raised to the power 21 bytes; in comparison, one megabyte is 10 raised to the power 6 bytes. And this growth is showing no signs of slowing down, as International Data Corporation (IDC) is predicting that this value will reach 40 ZB by 2020, an amount that exceeds previous forecasts by 14%. 

These are the results of the EMC-sponsored IDC Digital Universe study, "Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East"— which found that despite the unprecedented expansion of the digital universe due to the massive amounts of data being generated daily by people and machines, only 0.5% of the world's data is being analyzed. 

The digital universe will double every two years between now and 2020. A major factor behind the expansion of the digital universe is the growth of machine generated data, increasing from 11% of the digital universe in 2005 to over 40% in 2020. 

Large quantities of useful data are getting lost: The promise of Big Data lies within the extraction of value from large, untapped pools of data. However, the majority of new data is largely untagged, file-based and unstructured data, which means little is known about it. In 2012, 23% (643 exabytes) of the digital universe would be useful for Big Data if tagged and analyzed. However, currently only 3% of the potentially useful data is tagged, and even less is analyzed. 

Much of the digital universe is unprotected: The amount of data that requires protection is growing faster than the digital universe itself. Less than a third of the digital universe required data protection in 2010, but that proportion is expected to exceed 40% by 2020. In 2012, while about 35% of the information in the digital universe required some type of data protection, less than 20% of the digital universe actually has these protections. 

While emerging markets accounted for 23% of the digital universe as recently as 2010, their share is already up to 36% in 2012. By 2020, IDC predicts that 62% of the digital universe will be attributable to emerging markets. By 2020, China alone is expected to generate 22% of the world's data. 

As the infrastructure of the digital universe becomes ever more connected, information won't reside within the region where it is consumed, nor will it need to. By 2020, IDC estimates that nearly 40% of data will be "touched" by cloud computing (private and public), meaning that somewhere between a byte's origination and consumption, it will be stored or processed in a cloud.



Source- http://timesofindia.indiatimes.com/tech/enterprise-it/services-apps/Only-0-5-of-worlds-data-being-analysed-Study/articleshow/17601472.cms 

Wednesday 12 December 2012

Seminar Cum Free Counselling Session by Rrootshell Technologiiss

Rrootshell Technologiiss is going to organize a "Seminar Cum Free Counselling" session on “Technology & Career Scope in IMS”.


The Indian vendors account for a market size of $1.4 billion growing at a CAGR of 44 percent, reveals a Zinnov study on infrastructure management services. The current global market size of IMS is estimated to be $370 billion and is expected to grow at a CAGR of 4 percent over the next four years. It also states that the current IMS spend is expected to be around 12 percent to 20 percent of the total IT budget of the companies, which will certainly rise in the coming years. Almost 40 percent of the global IMS market is outsourced, which is around $150 billion.

So the requirement for IT professionals is going to be huge in the future, and people looking for a career in this area have huge scope.

So Rrootshell Technologiiss has taken an initiative to create awareness about the industry among IT professionals in Bangalore, and to show them a way forward in this field. The counselling session will help IT professionals uncover the hidden opportunities in this area.

This initiative seems to be a good opportunity both for those who are already in this field and for those who are planning to take up this area of IT as their career. 

This kind of initiative must be taken forward and should not stop after one or two sessions. New technologies and requirements keep coming, but awareness remains low, and such initiatives will truly help those who want to work in this field but are not aware of what is happening or what they need to do.





Sunday 9 December 2012

Top 10 categories for Big Data sources and mining technologies

Most discussions on organizing Big Data center on repository frameworks – specifically Hadoop clusters and MapReduce frameworks. This technology-focused view often overlooks the most important question, “What are you planning to do with the data you’re collecting?”

Since every answer will be different, this means there’s no one-size-fits-all solution. Success lies in recognizing the different types of Big Data sources, using the proper mining technologies to find the treasure within each type, and then integrating and presenting those new insights appropriately according to your unique goals, to enable your organization to make more effective steering decisions.

A Taxonomy of Big Data sources and technologies
For this process let’s define the two buckets for organizing your Big Data – the sources for Big Data, and the technologies to mine those sources.

Here are the Top 10 Big Data source types and the corresponding mining techniques that might be applied to find your gold nuggets.

1.    Social network profiles—Tapping user profiles from Facebook, LinkedIn, Yahoo, Google, and specific-interest social or travel sites, to cull individuals’ profiles and demographic information, and extend that to capture their hopefully-like-minded networks.   (This requires a fairly straightforward API integration for importing pre-defined fields and values – for example, a social network API integration that gathers every B2B marketer on Twitter.)

2.    Social influencers—Editor, analyst and subject-matter expert blog comments, user forums, Twitter & Facebook “likes,” Yelp-style catalog and review sites, and other review-centric sites like Apple’s App Store, Amazon, ZDNet, etc.   (Accessing this data requires Natural Language Processing and/or text-based search capability to evaluate the positive/negative nature of words and phrases, derive meaning, index, and write the results). (A small sentiment-scoring sketch appears after this list.)

3.    Activity-generated data—Computer and mobile device log files, aka “The Internet of Things.” This category includes web site tracking information, application logs, and sensor data – such as check-ins and other location tracking – among other machine-generated content.  But consider also the data generated by the processors found within vehicles, video games, cable boxes or, soon, household appliances.  (Parsing technologies such as those from Splunk or Xenos help make sense of these types of semi-structured text files and documents.)

4.    Software as a Service (SaaS) and cloud applications—Systems like Salesforce.com, Netsuite, SuccessFactors, etc. all represent data that’s already in the Cloud but is difficult to move and merge with internal data.  (Distributed data integration technology, in-memory caching technology and API integration work may be appropriate here.)

5.    Public—Microsoft Azure MarketPlace/DataMarket, The World Bank, SEC/Edgar, Wikipedia, IMDb, etc. – data that is publicly available on the Web which may enhance the types of analysis able to be performed.  (Use the same types of parsing, usage, search and categorization techniques as for the three previously mentioned sources.)

6.    Hadoop MapReduce application results—The next generation technology architectures for handling and parallel parsing of data from logs, Web posts, etc., promise to create a new generation of pre- and post-processed data.   We foresee a ton of new products that will address application use cases for all kinds of Big Data – just look at the partner lists of Cloudera and Hortonworks.   In fact, we won’t be surprised if layers of MapReduce applications blending everything mentioned above (consolidating, “reducing” and aggregating Big Data in a layered or hierarchical approach) become their own “Big Data”. (A minimal word-count sketch in the Hadoop Streaming style appears after this list.)

7.    Data warehouse appliances—Teradata, IBM Netezza, EMC Greenplum, etc. are collecting from operational systems the internal, transactional data that is already prepared for analysis.  These will likely become an integration target that will assist in enhancing the parsed and reduced results from your Big Data installation.

8.    Columnar/NoSQL data sources—MongoDB, Cassandra, InfoBright, etc. – examples of a new type of map reduce repository and data aggregator.  These are specialty applications that fill gaps in Hadoop-based environments, for example Cassandra’s use in collecting large volumes of real-time, distributed data.

9.    Network and in-stream monitoring technologies—Packet evaluation and distributed query processing-like applications as well as email parsers are also likely areas that will explode with new startup technologies.  

10.  Legacy documents—Archives of statements, insurance forms, medical records and customer correspondence are still an untapped resource.  (Many archives are full of old PDF documents and print stream files that contain the original and only systems of record between organizations and their customers. Parsing this semi-structured legacy content can be challenging without specialty tools like Xenos.)
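
As a taste of what the text-analysis requirement in source #2 looks like in practice, here is a minimal sentiment-scoring sketch (our illustration, not from the original article) that rates review snippets against tiny positive and negative word lists. A real deployment would use a proper NLP library and far richer lexicons, but the mechanics are the same: derive a score, then index and store it.

# Sketch for source #2: score review text as positive or negative with tiny word lists.
# The word lists and sample reviews are illustrative only.
POSITIVE = {"great", "love", "excellent", "fast", "recommend"}
NEGATIVE = {"bad", "slow", "broken", "refund", "terrible"}

def sentiment_score(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "Great phone, battery life is excellent and I love the camera",
    "Screen arrived broken and support was terrible, asking for a refund",
]

for review in reviews:
    label = "positive" if sentiment_score(review) > 0 else "negative"
    print(label, "|", review)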
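
And to ground source #6, here is the canonical word-count job written in the Hadoop Streaming style (a generic sketch, not something the article prescribes). The mapper and reducer read lines from standard input and emit tab-separated key/value lines, which is how Hadoop Streaming passes data between stages; the same file would typically be supplied to the streaming jar via its -mapper and -reducer options.

# Sketch for source #6: a word-count MapReduce job in the Hadoop Streaming style.
# Hadoop sorts the mapper's output by key before it reaches the reducer.
import sys
from itertools import groupby

def mapper(lines):
    # Emit "word<TAB>1" for every word seen in the input.
    for line in lines:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer(lines):
    # Input arrives sorted by key; sum the counts for each word.
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        print(f"{word}\t{total}")

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if stage == "map" else reducer)(sys.stdin)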

It’s how you use it
We’ve yet to see a conversation about looking at an organization’s management of Big Data as the multi-layered process that it is. Our litmus test will not just be how well we capture Big Data, but also how we organize it, visualize it, and operationalize it – to derive big value from Big Data investments.  Choosing the right technologies for culling value from the variety of Big Data sources is the next discussion we need to have, once we move beyond high-fiving each other because “it works with Hadoop!”

Of course, BIRT (Business Intelligence and Reporting Tools), the Eclipse open source project that serves as the foundation for the ActuateOne product suite, supports Hadoop.  But the real question to ask is, "For what?" This is where the real interesting discussion begins because it marks the crossover point for traditional Business Intelligence and Big Data.

Today the data architect is the Big Data expert, but imagine what will happen when you and I are reaping personal benefits from the Big Data that affects our own lives—traffic congestion might lessen; coupons will arrive on our phones for products we need, as we enter Target or WalMart; our grocery stores might warn us to throw out the milk before we have to sniff it; or we could discover the fundamentals of the Big Bang.

With the right management of Big Data, its potential is unbounded.  And this isn’t about technology “futures”…it’s happening right now.


Source- http://www.zdnet.com/top-10-categories-for-big-data-sources-and-mining-technologies-7000000926.