Sitecore PowerShell Console in Visual Studio

A while ago Jakob suggested that putting the Sitecore PowerShell Console in Visual Studio might not be a bad idea. He even provided me with the boilerplate code that served as a stub for the module (Thanks a million Jakob!).

So after some struggling on my part the new module is now on the Sitecore Marketplace. There is really not much to write about. If you like PowerShell and Sitecore Rocks you will find it pretty neat. Otherwise I'm afraid those are not the droids you are looking for :)

Basically what it does is allow you to enjoy PowerShell automation while still skipping the web interface (which is effectively why you're using Rocks, right?).

Prerequisites are:

Installation is fairly straightforward. Once you download the zip file, unpack it somewhere on your drive and run the install.bat within it. Once you restart Visual Studio you'll be able to do the following:

[Screenshot: opening the console from the Sitecore Rocks menu]

Which should result in the following outcome:

[Screenshot: the console opened in Visual Studio]

Feel free to contact me or post your questions as a comment below.

Recently I've been asked to audit a site for one of our clients. For a fairly seasoned Sitecore developer it's rather obvious what to look for in a site to get a feel for whether a solution is thoroughly thought through or just put together using brute-force development. You can usually tell if the implementation is sloppy or excellent, but how do you quantify that feeling to give an objective view to the person reading your report? Looking at the Sitecore Developer Network I've found the following set of recommendations. It is a great help in codifying what a proper Sitecore implementation should look like and what we should pay attention to, and most importantly it's a great reference when you're trying to prove that your feeling is more than just nitpicking but rather an industry standard that developers should adhere to. I strongly recommend that you look at it and think about how closely your practices match those that Sitecore recommends.

There is a small problem though. Not all of them are easy to assess, at least not without some clever tools in your toolbox. For example, what do I do with a statement like:

Use TreelistEx instead of Treelist when showing very big trees (like the Home node and its descendants) or have lots of Treelist fields in one single item. TreelistEx only computes the tree when you click Edit whereas a Treelist will compute it every time it is rendered.

It might be fine in a small site to verify in a few data templates that this recommendation is not violated, but in my case I was dealing with a multisite platform that can potentially host tens or even hundreds of sites. Going manually over the hundreds of fields in nearly 300 data templates (bah, even finding them) would not be a fun or easy thing to do. But hey, we have PowerShell; why do it by hand if I can whip up a one-liner for it? Let's try it then.
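
For instance, a quick one-liner along these lines (a rough sketch, assuming the Sitecore PowerShell Console item provider, where every field definition is itself an item based on the "Template field" template and exposes its field type through the standard Sitecore item indexer) would list every Treelist field together with the template it belongs to:

# Find all Treelist fields defined under the templates root and show
# which data template each of them belongs to.
get-childitem master:/templates -recurse `
  | where-object { $_.TemplateName -eq "Template field" -and $_["Type"] -eq "Treelist" } `
  | format-table Name, @{Label="Template"; Expression={$_.Parent.Parent.Name}}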

Continuous deployment in Sitecore with PowerShell

A few days back a buddy from our Sitecore team alerted me to an interesting question on StackOverflow asking about automating content promotion from one Sitecore instance to another. He suggested, and rightly so, that the PowerShell Console could be used in that scenario. While this was always possible by simply writing it as PowerShell code, the latest version of the console added a few cmdlets that make building packages much easier.

The easiest approach is to build the package visually in the package designer, save it and then simply use the console to read it and generate the installation zip like:

get-package "powershell console.xml" `
  | Export-Package -FileName "PowerShell Console.zip" -Zip

That's fine in most cases, but if you have more complex scenarios or want to generate custom packages, you might want to build them directly in PowerShell.

To create a package you simply use:

$package = new-package "Test Package";

Now that you have that package you might want to add some files and items to it.

Let’s add for example our item templates by querying the master database and creating a dynamic item source:

$TemplatesSource = get-childitem "master:/templates/Cognifide" `
  | New-ItemSource "Cognifide Templates";

And subsequently add it to our new package:

$package.Sources.Add($TemplatesSource);

While that by itself is fairly useful, the really cool part is that you have the full flexibility of PowerShell at your disposal when you create a source with static items. Let's say you want to add all items based on the "Article Template" template that reside anywhere under your "home" node. That would require quite a bit of clicking in the Package Designer, but it is trivial with the PowerShell Console:

$ArticlesSource = get-childitem master:/content/home/* -recurse `
  | where-object { $_.TemplateName -match "Article Template" } `
  | New-ExplicitItemSource "Cognifide Articles";

$package.Sources.Add($ArticlesSource);

You can apply any automation or filter you can think of to your Get-ChildItem, and you really don't have to skimp on the number of data sources; after all, you can re-generate your package at any time!
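
For example, here is another hedged sketch (the path is made up, and the last-modified date is read from the standard __Updated statistics field through the Sitecore item indexer) that packages only the items touched within the last week:

# Add only the items modified in the last 7 days as an explicit item source.
$RecentSource = get-childitem master:/content/home -recurse `
  | where-object { [Sitecore.DateUtil]::IsoDateToDateTime($_["__Updated"]) -gt [DateTime]::Now.AddDays(-7) } `
  | New-ExplicitItemSource "Recently Changed Items";
$package.Sources.Add($RecentSource);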

Similarly, you can do this with files on disk. Let's say you want to add all .aspx, .ascx and .ashx files, just to make sure your deployment contains all the latest code, and for the sake of this example let's assume your UI elements are located in the layouts folder under your web application:

$LayoutsPath = $AppPath+"layouts\*"
$Layouts = get-childitem $LayoutsPath -include "*.as?x" -recurse -force `
  | New-ExplicitFileSource "My Layouts";
$package.Sources.Add($Layouts);

Easy enough... now let's add everything within the bin folder as a dynamic file source:

$BinFolder = New-FileSource "Bin Folder" -Root "/bin"
$package.Sources.Add($BinFolder);

That is it, really. You may want to specify your package metadata, which you would do like this:

$package.Metadata.Author = "Auto generated " + `
  [DateTime]::Now.ToShortDateString();
$package.Metadata.Comment = "Isn't it cool?!";
$package.Metadata.Publisher = "Cognifide";

and then save it so you can open it later in the Package Designer:

$package | Export-Package -FileName "test package.xml"

Alternatively, you can open such a package as shown earlier:

get-package "test package.xml"

if you ever want to add more sources to it, or export it as a zip file to be imported with the assets in your target environment:

$package | Export-Package -FileName "test package.zip" -Zip

Now, on your target machine, you need to upload your package to the Data\Packages folder. Then to install it, all it takes is:

Import-Package "test package.zip"

Obviously all of it can be hooked up to the ribbon or to item context menus, or be scheduled... but I'm getting ahead of myself.

So how does it all relate to continuous deployment?

All of this can be completely automated. All you need to do is create a Script item as described in one of my previous posts, and call the PowerShell execution URL referencing your script from your CruiseControl server (or whichever continuous integration product you use) in a fashion similar to:

http://myhost/Console/Layouts/PowerShellResults.aspx?scriptId={1680E211-BD28-49BE-82FB-DA7232814C62}&scriptDb=web
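
For example, a minimal PowerShell build step doing just that (the host name and script ID above are of course specific to your environment) could be:

# Trigger the deployment script on the Sitecore instance over HTTP.
$url = "http://myhost/Console/Layouts/PowerShellResults.aspx" +
  "?scriptId={1680E211-BD28-49BE-82FB-DA7232814C62}&scriptDb=web"
(New-Object System.Net.WebClient).DownloadString($url)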

You need to deal with the fact that your continuous delivery environment is most probably not logged in to Sitecore. In this case the best approach is probably to keep the script in the web database, or the script item may turn out to be unavailable to you and the script will not execute.

Now, in your source environment, your script will create the package and upload it to an FTP server (there are plenty of ways to do this from PowerShell; you can find a couple of samples on Stack Overflow) and subsequently call a second part of the script on the target server.
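
As a rough sketch of that upload step (the server address, credentials and paths below are just placeholders), the stock .NET WebClient is enough:

# Push the freshly generated package to an FTP drop location.
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("deployUser", "secret")
$client.UploadFile("ftp://ftp.example.com/drop/package.zip", `
  "C:\inetpub\wwwroot\MySite\Data\packages\test package.zip")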

On the target server, a complementary script will be executed in a similar fashion by the originating server, and if you don't have direct access to the file you've just uploaded to the FTP server, you can download it and import the package.
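
Again as a sketch (same placeholder server and paths as above), the target-side part could boil down to:

# Fetch the package into the Data\packages folder and install it.
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("deployUser", "secret")
$client.DownloadFile("ftp://ftp.example.com/drop/package.zip", `
  "C:\inetpub\wwwroot\MySite\Data\packages\test package.zip")
Import-Package "test package.zip"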

Now, if you integrate the script with the ribbon in the Content Editor on the source server (as described in the previous post), you can have a one-click deployment solution on your dev machine. But the REALLY cool part would be to integrate it with the context menu (as described in this post) and be able to push parts of the site to production with a single click! Not to mention your nightlies can really be nightlies if you do it using the scheduled tasks integration.

Context PowerShell Scripts in EPiServer

Ok, so I got my shot of endorphins writing about PowerShell last week (damn, it's nice to be able to code again!), and I became pretty determined to make it usable, achieve all the goals I initially envisioned, and in the process build a usable tool and a library of scripts that people can use either directly or modify to meet their needs.

The goal for this week: Context Scripts

Context scripts are the first step to break scripting out of the admin realm and into the editor's space. Those scripts will still be written by admins and developers, but the goal is for them to be usable by authors. Such a script can be as trivial as, for example, gathering the great functionality of little plugins like the Unpublish button by Ted in one place, so you can mix and match them to your liking.

Some of the important bits:

  • Context scripts are visible to users on the "Scripts" page.
  • Scripts can be exposed to everyone or just to the groups of your liking; you define it in the script.
  • Scripts are grouped in collections, defined in *.psepi files that you drop into your application folder.

How do I define a script collection?

CMS UX – give the content some thought!

One of the many things we debate constantly at Cognifide is how to improve the user experience: how to make editors' lives easier, how to simplify common everyday tasks, what can be automated, and simply how to make our customers smile a little when they use our projects.

For that to work, apart from the overall big blocks being in place and working seamlessly (which is the absolute minimum required), you need to be VERY attentive to details.

What happens when a user enters a place in the system where they are not usually required to work; are they properly guided? What do they see if they click on a little link somewhere in the corner? Does every image have an Alt text attached to it? Do your buttons have tooltips? Do the users have alternate views on all content? Does the system communicate abnormal states in a descriptive way and guide the user towards the solution? Is the UI logically laid out? Did you REALLY think about which property should go on which tab? Have you set up the property names in a way that makes sense to a non-programmer? Do they have descriptions?

Part of the job we do is helping sometimes troubled EPiServer customers get a solution built elsewhere or in-house to work, and I keep noticing patterns that we at Cognifide have come to treat as bad practices. Many of those stem from a lack of research into the content being served, research that should happen in the discovery phase.

Discovery phase? What’s that?

A great deal of projects do not seem to be well thought out in many aspects. When you look at the solution, it feels like a developer just got some templates and ran with them. Once the front end matches what the HTML templates outline, the solution is pushed to production and forgotten by the design/development agency, and the poor customer struggles with it for years, trying to improve the ever-degrading performance and fighting a CMS UI that's been thrown together in a rush, possibly getting aggravated in the process and rebuilding it again and again.

You need to realize that once your site goes to production, the trouble begins for your customer, not ends.

Understand your content, please

A very common tendency in a lot of them is storing all content of one type under a single tree node, or in a very basic hierarchy. But:

  • What is the volume of content at the start?
  • Have you talked to the client about the maintenance of the content?
  • How do they plan to store older content?
    • Are they archiving it?
    • Do they plan to serve it to the general public while archived?
    • Is the old content actively browsed on the CMS side?
  • What is the volume increase over time?
  • What is the profile of the content? Is it a catalogue? Chronological news library?
  • Is there a taxonomy in place?
  • How often and which content is being modified?

EPiServer shows pages in a tree, and while we have observed the CMS performance improving over time, there are some basic scenarios that the hierarchy will never be able to deal with efficiently if it's not well thought out.

So your potential edge case scenarios might be that the customer has:

  1. 10 000 articles that need to be migrated for the site to go live, but they only plan to add 2-3 a month,
  2. They might be starting fresh but they plan to add 20 to 30 articles a day!

How do you deal with those?

Obviously the worst thing you can do is put them all under the "Articles" node. The customer will be frustrated to no end! Both your reputation and the CMS's get damaged while they try to do any basic thing in the system.

In the first case you need to work with the client to dissolve the articles into a hierarchy that's granular enough to leave you with 20-50 articles per node, tops. Dividing them into 10 categories of roughly 1,000 items each won't do! If the page names are meaningful, you may attempt to create a structure based on the first and second letters of the article names. This works best for directories of people or places.

The second case is probably going to happen when you are working with any kind of news site, be it an intranet of sorts, a news agency, a TV portal or an information broker. In that case it makes the most sense to put the content into a chronological structure: have a node for each year, month (and day if needed) for which there is an article.

AUTOMATE!

When a user writes an article that is supposed to fit into your content plan, move it into the proper node automatically. In the first case, move it to the proper category, or based on the page title move it to the proper place in the catalogue. In the chronological plan, move it to the day or month node upon creation. If a new node needs to be created for it, it's your responsibility to create it. Organize the content for the user or you will be in a world of pain sooner than you expect!

Those tasks are easily automated by hooking into the page save and create events. Your customer will love you for it.

Naturally, you don't necessarily have to have a single plan on a given site. A site can have both branches with a news-like plan and directory-like subtrees.

The bottom line is: you need to plan it ahead, and you need to learn about the content profile.

Distinction with a difference

You need to realize the distinction between the content and the presentation of it. Don’t try to cram the content into the browsing structure. Separation of concerns should not be limited to code organization. Separate concerns in the content structure as well.

Have the user journey designed around the best SEO practices. The hub pages should make sense from the visitor's perspective. DON'T try to put your articles under a hub page just because it browses them. This may work for little, tiny sites, but then again, those are not the sites where you will run into trouble, because it basically means your customer is barely ever touching them.

Have your content stored in a separate branch and reach out for it with a tag set that is related to the hub page.

Build a logical taxonomy and stick to it!

That basically means: treat your content equally no matter where it's stored, preferably having it tagged with a high-performance framework like the Faceted Navigation Framework published by Cognifide some time ago, or make it Lucene.Net based. You can try to use the EPiServer built-in categories. We have had limited success with them and with the performance of FindPagesWithCriteria (which we have effectively banned from our arsenal), but I was told the performance on that front has improved greatly in the latest EPiServer CMS release.

Regardless: don't rely on the page hierarchy to select the content. It doesn't matter where it is; the metadata is what should be driving the content stream. You can hand-pick your navigation links, fine, but article lists, news, announcements, events: treat them as a single content stream. Use a taxonomy to divide it into categories and you will be much happier in the long run, as you will gain much more flexibility to reorganize the structure and move the content around when its profile changes later.

Having used the EPiServer CMS for nearly 4 years now, these are the basic principles we have come to establish.

I wonder what your practices are. Where do our practices fit, or not fit, your design philosophy? Is there anything we could do better? Do you have practices regarding content planning? How do you analyze the content to make the best plan for it?
