The practices and trends of software development are in continuous flux. With the introduction of cloud platforms and providers, new ways of working have emerged that greatly improve developers’ ability to control infrastructure and deployment.
It is now possible for a small group of developers to configure and deploy large data center solutions without any manual installation or configuration of operating systems, virtual machines and services. These system parts are provided through configuration and automated tasks, which greatly shortens the time to market, from code change to production-ready system.
Being able to quickly reproduce copies of the deployment environment greatly reduces the historical problem of production configuration differing from the development, test and staging environments. The ability to scale up and down, and to create and destroy whole environments in the cloud, enables individual developers to do full and proper integration testing of their code changes against real-life environments that fully emulate production, reducing surprises when moving new changes into production.
Another big trend is the explosion of platforms and devices for consuming content and using applications. This trend has changed how software is built: client apps and front ends need to be more agile and more responsive. Application logic then naturally moves back into the server infrastructure, with less logic running and executing on the clients.
This makes it important to build service-based solutions, often in the form of APIs that can be consumed by any platform and client. Relying on one specific platform for building client software is no longer viable in the modern workplace, where devices from multiple manufacturers run a multitude of operating systems.
Moving the majority of application logic back to the servers enables more agility in the client applications.
Rebuilding existing software is a major task, so building the application logic in a service and API pattern makes the system more resilient to future changes in the client apps that consume the APIs.
With the introduction of HTML5, web technologies have become a better platform for building client applications, and they are the only platform that scales well across all devices. Even so, one needs to plan for more rapid change in the client applications than previously in software history. Changes happen quickly, and to stay at the forefront and give users the best possible experience, the demand for rewriting the client parts of a whole system is higher now than ever before.
There are plenty of examples out there that end up being more elaborate than what is actually needed, so I built this minimal example, which is the layout shell for a top-secret web game I’m working on.
My example runs on Internet Explorer 11, Firefox 35 and Chrome 40. As this is a template for a game, the main area consists of a canvas, but it could easily be replaced with a content container for a regular web app or web site.
There are no vendor prefixes in the CSS code, so this will only work in recent browser versions that comply with the latest CSS and HTML standards. More details on my new game project will be released on my blog in the coming months.
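A minimal version of such a shell can be sketched roughly like this; the markup, class names and colors below are illustrative placeholders of mine, not the original game source:

```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Layout shell</title>
<style>
  /* Full-height column layout: fixed header and footer, flexible main area. */
  html, body  { height: 100%; margin: 0; }
  body        { display: flex; flex-direction: column; font-family: sans-serif; }
  header      { flex: 0 0 48px; background: #222; color: #fff; }
  main        { flex: 1 1 auto; position: relative; }
  main canvas { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }
  footer      { flex: 0 0 32px; background: #222; color: #fff; }
</style>
</head>
<body>
  <header>Game title</header>
  <main><canvas></canvas></main>
  <footer>Status bar</footer>
</body>
</html>
```

Unprefixed flexbox like this works in IE11, Firefox 35 and Chrome 40, which matches the browsers the template targets. For a regular web app, swap the canvas for a scrollable content container.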
The day has come for the public availability of Flickr Downloadr 3. This is a photo download tool I first released back in 2007, updated to version 2 in 2009, and now, after many iterations, the third release is available.
Back in 2010, I had a prototype built on WPF; it was never released as an update. Next I started working on one built on HTML technologies, using Adobe AIR as the delivery mechanism, which embeds a WebKit-based runtime. Later I figured on using the Chrome Web Store, as this would improve the delivery and cross-platform support.
After a lot of work, and constant refactoring and changes, the first public release is now available. You can install it from the Chrome Web Store and check out the product website.
If you continue to read, I’ll explain some of the work that went into the making of Flickr Downloadr 3.
How it was made
As mentioned, this app went through many rewrites and changes in technology. This was primarily a way for me to learn new technologies and gain experience and knowledge.
The UI itself has evolved, from a completely custom UI to one that is now built on the Material Design guidelines by Google.
Some time before the end of 2014, I decided to reduce the amount of functionality to speed up time to market. As a result, the new version lacks some features that existed in the previous version. I’m planning to add these features moving forward, but to ensure high quality in the base functionality, I decided to remove all the unnecessary features and focus on simplicity and primary functionality.
The first iteration of the OAuth token service was built on ASP.NET SignalR using Web Sockets; eventually I turned to Node.js, Express and socket.io as a replacement. Luckily there were NPM packages available for some of the Flickr API calls, so the rewrite was not that hard. After a while, I replaced socket.io with regular HTTP REST calls, due to Web Socket limitations on Azure Websites.
The service is hosted on an Azure Website and uses DocumentDB for storage, which is very easy to work with from Node.js thanks to the NPM package.
Big thanks to all the developers who have built the open source components used by this app, and thanks to all existing users of my app. Please give feedback and help steer the development in the future!
Google released their Material Design icons a while back. These are available as pre-processed combined SVG files, CSS sprites with PNG, and individual SVG and PNG files for iOS and the Web.
These can be installed with bower or npm:
npm install material-design-icons
When building a web app (or website) it’s important to reduce the number of network requests and the traffic, avoiding too much overhead when loading. That’s why you need a way to package only those (Material) icons you need.
First, copy the individual .svg files you need from the downloaded material-design-icons folder into your project (e.g. images/icons/svg). As an example, the resulting .svg file for my Flickr Downloadr project is 5 KB, compared to the relatively hefty ~56 KB of svg-sprite-actions.svg alone. If you need icons from different categories, the sprites total ~250 KB.
To process these files, you need the node package named svg-sprite. And if you want to automate it using Grunt, you need grunt-svg-sprite.
Tip: When you install this package, parts of it are compiled using the Visual Studio compiler. If you don’t have Visual Studio 2010 installed, you can use the msvs_version switch to pick another version of Visual Studio, such as 2012 or 2013:
npm install grunt-svg-sprite --save-dev --msvs_version=2013
That command will update your package.json with the dependency and ensure that the compilation uses Visual Studio 2013.
Register the grunt task in your gruntfile.js:
grunt.loadNpmTasks('grunt-svg-sprite');
Then create your own configuration; src and dest point to the folders from the previous step (exact option names can vary between versions of grunt-svg-sprite):
// Define the configuration for all the tasks
svg_sprite: {
    icons: {
        src  : 'images/icons/svg',
        dest : 'images/icons'
    }
}
Now you can run the task with this command (the task name matches the configuration key):
grunt svg_sprite
This will parse the images/icons/svg folder, then generate two files inside the images/icons folder: one named “sprites.svg” and another named “icons.css”.
Include icons.css in your web project and start using your icons with the generated classes. The class identifiers are the same as the file names.
This process will obviously work just as well with custom icons that are not part of the Material Design icon package. That makes it easy to mix and match SVG icons from different sources and get a nicely generated, minimized SVG sprite with autogenerated CSS.
Having a good text editor when you are writing source code is important. With almost two decades of experience writing code, I have had my share of time with different editors. In these times of modern web apps, we have different needs than before, and there are a lot of new players in the game.
In this article I’m only mentioning a few of the many editors I’ve tried throughout the years and in the last few months. Here you can find a list of HTML editors on Wikipedia, and I suggest you look around to find what fits your own needs.
A little background
I have used Visual Studio as my primary text editor for many years, though my favorite web editor of all time must be HomeSite 3. It was fast, efficient and powerful. It was originally developed by Allaire Corporation and acquired by Macromedia in 2001. Version 4 was, at the time of release, a bit bloated and slow on the computers of the day compared to 3, so I relied on version 3 for a long time. The last version of HomeSite was 5.5, released in 2003 by Macromedia.
Macromedia was an amazing company in many ways. Some younger developers getting into our industry might never even know that name, as the company was acquired by Adobe in 2005. The same guy who made HomeSite (Nick Bradbury) started building TopStyle when he left Allaire in 1998. TopStyle has a lot of features from HomeSite, but it belongs to another age, with an outdated and complex UI.
While my primary editor is Visual Studio, as I’m doing a lot of work with Microsoft .NET, for my Node.js, web app and other needs I try to work with different editors, to see which one is optimal when the requirements for features such as debugging are smaller. Visual Studio is a fully integrated development environment, so it makes sense to have a separate text editor that is faster and more lightweight.
A lot of people use Notepad++ and Sublime Text. I do think those are some of the most widely used editors around. I do love Sublime Text; it’s fast and powerful. Yet there is a new breed of editors, built on a completely different foundation than previous editors. These are the editors of the future, as they are built on the same technology that you build with them.
The new breed of editors is built on the Chrome/Chromium engine, and some embed Node.js as well. That means they are built on the same foundation as the Google Chrome web browser.
It also means you get the same great developer tools to analyze and debug the editor itself. Additionally, the editor is extensible with web technologies, as opposed to the proprietary technologies used in some of the older editors available.
The first editor built on Chrome that I started using actively was the Atom text editor. It is developed by GitHub, which was a big reason for me to start using it. I have used it for many months already and followed its development. It’s a great editor, and I have written a couple of extensions for it.
Still, I kept looking elsewhere, and I found the Brackets text editor, developed by Adobe, which saw its 1.0 release yesterday.
It is very similar to Atom in many regards, including its extensions. Some of the must-have extensions for both Atom and Brackets are: Git support (built into Atom), File Icons (makes the different files more clearly distinguishable) and Stylus (I recently moved to Stylus as my primary CSS pre-processor).
Building extensions for Brackets is very simple. All you need to do is open the extensions folder, available from the Help menu. Inside the “user” folder, create a folder for your extension. Within your extension folder, create a “main.js” and you are “done”!
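Here is a rough sketch of what such a main.js can look like. Note the stubs at the top: Brackets normally injects define and the brackets global at runtime, so in a real extension you delete the stubs and keep only the define(...) call; the command id and menu label are made up for the example:

```javascript
// --- Stand-ins for what Brackets provides at runtime (remove in a real extension).
const registered = [];
const brackets = {
  getModule(name) {
    if (name === 'command/CommandManager') {
      return { register: (label, id, fn) => registered.push({ label, id, fn }) };
    }
    if (name === 'command/Menus') {
      return {
        AppMenuBar: { FILE_MENU: 'file-menu' },
        getMenu: () => ({ addMenuItem: () => {} })
      };
    }
  }
};
const define = (factory) => factory();

// --- main.js: the actual extension body.
define(function (require, exports, module) {
  'use strict';
  const CommandManager = brackets.getModule('command/CommandManager');
  const Menus = brackets.getModule('command/Menus');

  const COMMAND_ID = 'example.sayHello'; // hypothetical command id

  // Register a command and expose it in the File menu.
  CommandManager.register('Say Hello', COMMAND_ID, function () {
    console.log('Hello from my Brackets extension!');
  });
  Menus.getMenu(Menus.AppMenuBar.FILE_MENU).addMenuItem(COMMAND_ID);
});
```

Reload Brackets (Debug, Reload With Extensions) and the new menu item shows up.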
Run Chrome App
My extension for Atom and Brackets enables a run command for your Chrome Apps. This is something that is built into the Chrome Dev Editor by Google, so I wanted to replicate it to make it fast and easy to run a Chrome App while editing. If you want to quickly test your Chrome App on an Android phone, I suggest checking out that editor. It relies on the “Chrome App Developer Tool” app that you install on your Android device, and I plan on adding support for this in Brackets using cca.
My new favorite text editor is now Brackets, while it used to be Atom. I suggest you try both!
What is your favorite text editor? Why do you use it, and what makes it good? Leave a comment below!
Microsoft is working on the new major release of Windows, which will be released some time in 2015 and be named Windows 10. It’s available now in a technology preview, though it’s not advisable for regular users to upgrade at this time. This time around, Microsoft will have a single OS that spans all screens and devices: phones, tablets, laptops, desktops and big screen TVs (Xbox One).
We are now about to enter what I call the threshold to the cloud.
The following is based upon my own personal experience and memories of my time in the software industry. Memories can play tricks on us, so please remember that as I might not be entirely correct (I did verify the release dates and some other details).
We have now come full circle when it comes to software development. I did some (in my own view) impressive intranet solutions back in 1999-2001, utilizing HTML features such as hidden iframes and DHTML to make rich web applications. These ran only on Internet Explorer 5, 5.5 and later 6, which was released in 2001, by which time it had won the browser war and become the most widely used browser. From IE4 onwards there was a very rapid release cycle and lots of “innovations” in terms of features extending the HTML specification. Some of those innovations stuck around, others disappeared.
With the growing popularity of Java (released in 1996) as a development platform for client and server, and with Microsoft forced to discontinue their own Java VM, Microsoft had to come up with an alternative platform to avoid losing too many developers from their Windows platform. And so .NET was born in early 2002 (with a beta version in 2001).
With IE6, Microsoft had won the browser war, with over 90% market share. That’s when they abandoned their browser, which effectively held the development of the World Wide Web back for a whole decade. Yes, the effect was a major step backwards for the software development world. The standards work came to a halt: HTML 4.01 was finalized in 1999, and now, in 2014, HTML5 is in proposed recommendation state.
Wired wrote about Bill Gates and his strategy letter The Internet Tidal Wave for Microsoft back in 1995:
“Gates proceeded to outline a strategy for Microsoft to not only enter the internet, but to dominate it.” – Wired
Their strategy after ’95 was in some terms a great success, with a complete defeat of their web browser competitors. It did have negative effects on the company, which has since been found guilty in antitrust cases in Europe. Before ’95, they failed to see the importance of the Internet.
– Microsoft failed to understand the Internet in 1995.
– Microsoft failed to understand the Web in 2001.
– Will Microsoft get it right the third time? I do think they will!
It took five years, until 2006, for Microsoft to release Internet Explorer 7. Mozilla had major issues with bloated software, so Firefox was born. It was a long struggle to win back market share. And eventually Google launched Chrome.
As this graphic shows, it took a whole decade for innovation to start growing in the browser space again.
Race of the giants
From the release of .NET, there was a race between Sun with their Java and Microsoft with their .NET. This gave us technologies such as Windows Forms, Windows Presentation Foundation and, more recently, Silverlight. The race was for the desktop clients and the servers. Microsoft won the desktop easily, yet struggled more on the servers. ASP.NET Web Forms was a technology meant to ease client developers onto the web. It somewhat accomplished that, but it also added a lot of bad stuff to the web. At the same time, the open source communities saw rapid innovation with projects such as Ruby on Rails. Microsoft responded with ASP.NET MVC, released in 2007. Yes, it took Microsoft almost as long to truly understand the web development platform as it took them to upgrade their web browser.
After the first version of ASP.NET MVC, Microsoft changed. They moved to a cycle of rapid releases, with lots of great innovations. Their recent efforts to turn major parts of ASP.NET into open source will help a lot. What’s happening with the next version of Visual Studio and the ASP.NET platform is amazing and empowering to developers.
Apps come to town
Apple has had unprecedented success with their App Store. The amount of apps developed, and the millions that app developers have been paid, is amazing. It changed our lives, and it still does. Apps for everything in our lives. Here is a video that illustrates how everything on our desk has now become digital.
The traditional way of searching, finding, downloading and installing software is tiresome and prone to many errors. I have had to fix many computers that have ended up with a lot of malware. Having a dedicated app store for a platform ensures that the games and apps are tested and verified. With Windows 8, Microsoft added their own digital store into the OS.
At the same time as Windows 8 was announced, it was clear that their software development strategy was about to change. The future was web technologies.
I believe that the strategy behind Windows 8 was good and correct, but it failed in the execution. The biggest issue was the separation of desktop and touch. The apps were all full screen, even a utility such as the calculator. The market responded negatively; something had to change.
The title of this section is Web Apps; with this term I try to encapsulate the whole world of HTML-based apps. Microsoft calls them Universal Apps, Google calls them Chrome Apps, Mozilla calls them Open Web Apps. One thing is for sure: there will continue to be changes in this space. All three of these platforms have gone through naming revisions already; see my blog post Packaged Web Apps.
I’m betting that Web Apps will stick, it’s short and concise. I love it, Web Apps!
And I do realize that the term “web application” is already widely used for different things, with different meanings to different individuals. I’m still betting on it to win.
Google has had web apps for a while now, enabling developers to build software using web technologies (HTML5) that runs on Windows, Mac OS, Chrome OS and Linux. That’s right: the old pipe dream of Java, write once run anywhere, was realized with web technologies.
On the threshold
Now that we are on the threshold to the cloud, desktop apps, or rather web apps, will link our computer desktops directly to the cloud. The lines between what is local and what is remote will blur even more than they already have. Apps will update automatically, in the same way websites have for years.
I believe we are living in interesting times, as we did more than a decade ago in 2001. The dot-com crash hurt our industry a lot, and one can only speculate whether that might be part of the reason why Microsoft suspended their Internet Explorer efforts. I don’t know the historic details of that tale, other than what has been made publicly available throughout the years.
I have for years pushed web technologies, HTML5, as the future of software development. Now is the time to get serious, go develop web apps.
For the final proof that we are at the threshold, have a look at my screenshot showing two versions of the same app running: one Universal App and one Chrome App. Enjoy!
One Store to rule them all?
One final thought: Is there room for two app stores on Windows? Will developers be on both platforms?
We have to remember that even though Windows is the most important platform in terms of market share, developing Universal Apps for Windows devices means that your apps will only run on those devices. I don’t think many developers would want to leave OS X, iOS, Android, Linux and a whole range of other platforms behind.
I believe that web technologies are the answer to this question; they enable developers to make software that can more easily be deployed across different mechanisms and platforms. The code reuse between Windows Store Apps and Chrome Apps can be immense, if you plan for it and develop with cross-platform in mind.
Here is another example, with the Amazon Kindle Reader: one is a Windows App, the other a Chrome App. Take care and be safe!
Back in January 2011 I wrote the first instructions on how to secure your site with an SSL certificate on Windows Azure. Since then, both Azure and IIS have been updated, so I’m revising these instructions here.
Learn how you can create the CSR (Certificate Signing Request) for Windows Azure, using Internet Information Services on Windows Server. The CSR is used by any certificate provider to generate the proper SSL certificate. You will now learn how to go through the process of securing your Windows Azure hosts and enabling users to access your services over HTTPS.
Create a new Windows Server
As it is now possible to create virtual machines on Windows Azure, you can easily create a new VM on Azure if you don’t have an on-premises Windows Server.
After the machine is provisioned, you can connect using the Remote Desktop Client. You will find the public TCP port on the Endpoints page of the virtual machine. Connect to your Windows Server, whether on Windows Azure, another provider or any on-premises server.
Install Internet Information Services
Choose the Add roles and features option in the Server Manager. Go through the wizard and select the Web Server (IIS) option on the Server Roles step. Accept the dialog that adds the required feature, the IIS Management Console.
Certificate Signing Request
First open IIS Manager and navigate to the root element for the web server. Open the Server Certificates by double-clicking on the icon, as seen in the screenshot.
On the right you will see the Actions options. Click the Create Certificate Request to start the wizard.
Fill out the fields in the wizard; in the Common name field you fill out your domain name.
The next step is choosing the bit length (strength) of the certificate. Choose a minimum of 2048; in this example I have chosen 4096, which is more secure, but requires more computation and can be slower on high-traffic sites.
Choose where to store the signed certificate request on your local computer.
Open the file in a text editor and copy everything. You need this when applying for your SSL certificate.
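The saved request file is plain Base64-encoded text. It looks something like this (the body is shortened here):

```text
-----BEGIN NEW CERTIFICATE REQUEST-----
MIIEVDCCAjwCAQAw...
-----END NEW CERTIFICATE REQUEST-----
```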
Copy and paste the certificate signing request to your selected SSL provider. There are many providers available, with different processes and levels of verification. Make sure you research which type of certificate and verification fits your requirements.
Installing and exporting SSL certificate
After you have supplied the request to your SSL provider, and have completed the other verification steps, you will receive one or multiple .crt files, often packed in a .zip.
You normally don’t need the extra certificates, such as the CA (Certificate Authority) certificates that are included. These certificates are normally already installed on your server.
Copy the www_domain_com.crt or similar named file to your Windows Server.
Next step is to install the SSL certificate on your local web site in IIS. We will install the certificate and later export it for use on Windows Azure.
Go back to IIS Manager and the Server Certificates window. Below the link we used earlier there is another one named Complete Certificate Request. Click it and complete the wizard. Note that IIS normally looks for files with the .cer extension, so you might have to choose the *.* option in the Open dialog if your certificate is in the .crt format.
It’s OK to install the certificate in the Personal certificate store; you might get a permission error if you try another.
Locate the installed certificate in the Server Certificates view inside IIS. Right-click on the certificate and choose Export.
Pick a location to store the .pfx, and enter a password. Make sure it’s a decent-quality password; if you ever lose the .pfx, you don’t want anyone to be able to easily brute force the password. If someone gets hold of both the PFX and the password, they will have access to the private key of your certificate and can use it for malicious actions in various manners.
Important: Keep your PFX file safe and keep its password safe. It contains the private keys and shouldn’t be distributed widely.
Configure Certificate for Azure Web Role
The next step is to configure the web roles in your cloud project within Visual Studio to use the new certificate. The first thing to do on your development machine is to copy the .pfx file over, double-click to open it, choose Local Machine as the store location, and fill out the password you entered earlier. As you already have an exported private key within the .pfx file, you don’t need to check Mark this key as exportable.
Now you can open your Visual Studio solution with the cloud project. Expand the Roles folder and double-click your Web Role. Find the Certificates tab and click Add Certificate. Fill out an identifier name (it can be anything), choose LocalMachine as the Store Location and My as the Store Name. In the Thumbprint column, click the “…” button to open the certificate selection dialog.
If you can’t find the certificate in the dialog, experiment with the various stores to see if you can find it. If you are unable to find it, you can manually install it using the Certificate Management Console add-in.
Navigate over to the Endpoints tab and add a new endpoint with HTTPS as the protocol, and select the certificate to be active for that endpoint.
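Behind the scenes these dialog settings end up in your cloud project’s ServiceDefinition.csdef. The relevant fragment looks roughly like this (the role, certificate and endpoint names below are examples; use whatever you entered in the dialogs):

```xml
<WebRole name="MyWebRole" vmsize="Small">
  <Certificates>
    <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="MySslCert" />
  </Endpoints>
</WebRole>
```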
Now you can launch your web project from Visual Studio, and the local Azure emulator will open two instances, one for HTTP and one for HTTPS. Don’t be afraid of the certificate warnings; these are normal. Your certificate is only valid for the production URL you specified while ordering the certificate, meaning you will get a warning if you reuse the certificate for localhost, “dev.domain.com” and other sites. There are also wildcard certificates, issued for e.g. *.domain.com, which can be used for many purposes. If you are building a big cloud solution, where you want custom domains for Azure Storage etc., then you should apply for a wildcard certificate. Beware though, it comes at a premium price.
Simply choose to skip/ignore/accept the certificate for your localhost debugging and developing needs.
Adding Certificate to Windows Azure hosts
The last and final step before you deploy your updated web role is to ensure that Azure has a copy of the certificate.
Log in to the Azure Management Portal, find your Azure instance, and navigate to the Certificates option. Choose the Upload a certificate link and select your .pfx file.
After the process is complete, you can deploy the updated version of your cloud project. Your site should now be fully functional with the ability to run over HTTPS for secure communication.
Securing your services with HTTPS is important to ensure the privacy and safety of your customers and users. Never allow anyone to authenticate with their credentials on your site unless it’s over HTTPS. When you don’t use HTTPS, all the information a user enters on your web site can be sniffed and logged by third parties at various steps in the network between the client computer and your hosted server. In many cases, this data travels across multiple country borders.
Installing and configuring HTTPS certificates is sometimes hard, but I hope this walk-through makes you aware of the importance of using it, and of how quickly and easily you can get up and running with a valid SSL certificate.
If there are any questions, please leave a comment.
Here follows a working example of how to build a chromeless Chrome App, which replicates the minimize, restore, maximize and close buttons with custom styling. This makes it possible for you to easily build your Chrome App with a totally custom chrome/window.
When you start developing Chrome Apps, you’ll quickly discover that the frame/chrome is not native on Windows 8. The frame is completely white, and it doesn’t indicate in any way whether a window is active or in the background, other than the X (close) button, which changes from red to gray when the window goes into the background.
Here is how the frame looks; on a white background, you can’t really see the frame, it blends into the background.
The next screenshot shows the normal native Windows 8.1 frame/chrome.
The source is very simple and is built using AngularJS. The example app icon is provided by Pixel-Fabric.
Here is a screenshot from the structure and source of the index.html file.
Here is how the final result looks.
The code is fairly simple, and if you are already looking into Chrome App development, you should feel right at home.
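The chromeless look itself comes from creating the window with frame: 'none'; the custom buttons then just call methods on the current app window. Below is a sketch with the maximize/restore toggle pulled out as a plain function; the win argument mirrors the object returned by chrome.app.window.current(), and the button ids are made up:

```javascript
// Toggle between maximized and restored state, like the native button does.
function toggleMaximize(win) {
  if (win.isMaximized()) {
    win.restore();
  } else {
    win.maximize();
  }
}

// Inside the app the wiring looks roughly like this (runs only in a Chrome App):
//   const win = chrome.app.window.current();
//   document.getElementById('btn-minimize').onclick = () => win.minimize();
//   document.getElementById('btn-maximize').onclick = () => toggleMaximize(win);
//   document.getElementById('btn-close').onclick    = () => win.close();
```

The minimize and close buttons map directly to win.minimize() and win.close(); only maximize needs the toggle.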
I have developed rich web-based applications since 1999. The first web apps ran solely on Internet Explorer, as it was the most advanced browser, with features such as DHTML, iframes and, in later years, the XMLHttpRequest object.
Third-party proprietary runtimes have outplayed their role. While Flash has been a great productivity tool and given us real multimedia features in the browser, HTML5 has matured enough to take over that space. Silverlight by Microsoft had a short life, but it is still the primary runtime for apps on Windows Phone.
Enter Packaged Apps
The basic concept of packaged apps is that they are regular web apps that additionally include an application manifest file, which defines features such as the app name and app icons. These manifests are often in JSON or XML format. The W3C Packaged Web Apps specification has been in recommendation state since November 2012.
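As a concrete example, a minimal manifest for a Chrome App looks roughly like this (the name and file names are placeholders):

```json
{
  "manifest_version": 2,
  "name": "Hello Packaged App",
  "version": "0.1",
  "app": {
    "background": {
      "scripts": ["background.js"]
    }
  },
  "icons": {
    "128": "icon_128.png"
  }
}
```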
As you install one of these apps, it can launch in its own separate window outside of the browser. Depending on the browser, apps can be installed directly from a web site, or through a marketplace/app store.
While there are others, these are the two biggest browsers and their accompanying stores. One other example I would like to show is Pokki, which I personally use.
While apps from the Firefox Marketplace can be installed and launched from the desktop, it appears their main strategy is a marketplace of HTML5 apps and games that will be compatible with the coming mobile devices running Firefox OS. There is even a Firefox OS simulator you can install in Firefox.
The feature sets are different. Google Chrome used to allow you to create desktop shortcuts, but that feature was removed (or hidden from me) when Chrome (the development branch) was updated with the new app launcher. In my example screenshot, I have a shortcut to SkyDrive that was created before the new launcher arrived.
The first screenshot shows the Firefox Marketplace opened in a regular Firefox instance. On the desktop, I have three web apps that have been installed on my computer. The topmost window is the Pulse app running in a separate window.
In the second screenshot, you can see the old SkyDrive desktop icon that launches the Microsoft SkyDrive web app in a separate window. The topmost window is the new app launcher in Google Chrome. You can right-click on the apps to set options for how to launch them. As you can see on the left, I have launched the Kindle Reader web app in another separate window.
Why consider packaged web apps?
If you are afraid of change and want a stable foundation to build your apps and games on, you should, as of today, consider something other than web technologies. If you embrace this change and innovation, you can reap the benefits of learning these technologies early. Being an early adopter always comes at a cost, but HTML5 and packaged web apps are already starting to gain traction.
I’m working on multiple packaged web apps that will be released this year and next. Please feel free to contact me for any questions and help.
Here follows some more links to learn more about packaged web apps and how you can get started developing these on your own.
Here is a good way to secure your documents in the cloud using Windows 7 and Windows 8. I’d like to note there is one alternative (among many) which I also use: TrueCrypt. The method described here relies on SkyDrive, VHD and BitLocker. Some of the tips also apply to the use of TrueCrypt.
Get Cloud Storage
Recently, all the major cloud storage providers have released a rich client for the Windows and Mac desktop OSes with which you can sync a local folder with your cloud storage. There are many smaller and bigger competitors in this space; the big players are Google, Microsoft and Dropbox.
I’m using all three of these alternatives, with SkyDrive being my primary platform. Make sure you download the software for your cloud storage provider and set up the sync between the cloud and your computer.
Create the Virtual Hard Disk
The next step is to launch the Computer Management console on your Windows computer. On Windows 8, open File Explorer and choose the “Manage” button on the ribbon bar. This should open the Computer Management console.
Navigate to Storage/Disk Management. Right-click on Disk Management and choose the Create VHD option.
In the Location field, locate your synced cloud storage folder and type the name of your VHD file, for example “Documents.vhd”. I chose the VHD option and not VHDX, to ensure the file is compatible with Windows 7. Pick whatever file size you want the virtual hard disk to be.
I chose the disk type to be dynamically expanding. It’s important to note that the SkyDrive sync is intelligent: it won’t sync the whole huge file every time there is a change, but figures out which parts changed and only syncs those parts of the file.
After the disk has been created, you should run the Initialize Disk option. After initializing, right-click on the unallocated partition and choose “New Simple Volume”. Assign a drive letter to the volume and format it from the wizard.
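For the command-line inclined, the same create, attach and format steps can be sketched as a diskpart script. This is only a sketch; the file path, size (in MB, here 10 GB), volume label and drive letter are example values for this walkthrough, so adjust them to your own setup:

```text
rem Save as create-vhd.txt and run from an elevated prompt: diskpart /s create-vhd.txt
rem Creates a dynamically expanding VHD inside the synced SkyDrive folder,
rem attaches it, partitions it, formats it and assigns it drive letter X:
create vdisk file="C:\Users\me\SkyDrive\Documents.vhd" maximum=10240 type=expandable
select vdisk file="C:\Users\me\SkyDrive\Documents.vhd"
attach vdisk
create partition primary
format fs=ntfs label="SecureDocs" quick
assign letter=X
```

The `type=expandable` line corresponds to the dynamically expanding option I picked in the wizard above.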
Turn on Encryption
The next step is important to ensure your files are stored encrypted and securely in the cloud: you have to choose which encryption technology to use.
For my own needs, I’ve chosen BitLocker. On Windows 8, all you have to do is open the mounted drive, in my example here the X: drive, and choose the Manage tab from the ribbon bar at the top.
Choose the Turn on BitLocker option and complete the wizard that appears. Make sure you enter a strong (long) password. A very long sentence that you can remember is a lot better than a small number of random characters.
One of the new options on Windows 8 is to store the recovery key for BitLocker with your Microsoft account. This somewhat defeats the purpose of storing your virtual hard disk encrypted on SkyDrive, as you are giving away the key to unlock the drive. Print out the recovery key and store it on an external USB drive instead.
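The same step can also be done with the built-in manage-bde tool from an elevated command prompt; the X: drive letter here is just the example letter from above:

```text
rem Turn on BitLocker with a password protector on the mounted VHD volume
rem (you will be prompted to enter the password)
manage-bde -on X: -password

rem Check encryption progress and protection status at any time
manage-bde -status X:
```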
Mount and Eject
To mount the virtual hard disk on any of your computers, just right-click on the .vhd file and choose Mount.
When you do this operation, you might see the following error message appear:
“Sorry, there was a problem mounting the file.”
You might at this point discover that the mounting actually worked just fine and the drive appears in File Explorer, but there is one final step you need to complete: you need to unlock your drive.
Right-click on the drive in File Explorer and choose the Unlock Drive… option. Enter your password, and you should then have fully unlocked your secure and encrypted cloud storage drive.
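On Windows 8, both the mount and the unlock step can be scripted from an elevated PowerShell prompt as well. A minimal sketch, assuming the example path and the X: drive letter from earlier:

```powershell
# Attach the VHD (same as right-click -> Mount in File Explorer)
Mount-DiskImage -ImagePath "C:\Users\me\SkyDrive\Documents.vhd"

# Unlock the BitLocker-protected volume; you will be prompted for the password
manage-bde -unlock X: -password

# When you are done working, detach the VHD again
Dismount-DiskImage -ImagePath "C:\Users\me\SkyDrive\Documents.vhd"
```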
As long as your VHD is stored in the cloud, you should ensure that it is encrypted with BitLocker. Additionally, make sure that the password used for BitLocker is NOT the same as your Microsoft account password.
If someone steals or guesses your Microsoft account password, they still won’t be able to look into your documents and files.
Make sure you take a backup of your VHD files to a local hard disk once in a while. My suggestion here is to copy the .vhd file out of the SkyDrive folder, mount that copy as a separate drive, and finally remove BitLocker from that copy of the VHD file. That way, you will have an unencrypted backup of the VHD in case of emergency.
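As a sketch, that backup routine could look like this from an elevated PowerShell prompt. The backup path and the Y: drive letter are assumptions for the example; check which letter the copy actually gets when you mount it:

```powershell
# Copy the .vhd out of the synced SkyDrive folder to a local backup location
Copy-Item "C:\Users\me\SkyDrive\Documents.vhd" "D:\Backup\Documents.vhd"

# Mount the copy and unlock it with your BitLocker password
Mount-DiskImage -ImagePath "D:\Backup\Documents.vhd"
manage-bde -unlock Y: -password

# Start removing BitLocker from the copy; decryption runs in the background
manage-bde -off Y:

# Wait until -status reports the volume is fully decrypted before dismounting
manage-bde -status Y:
```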