A story of a saved EPiServer site

I meant to write this a long time ago, but somehow it never quite happened. What follows is the story of an EPiServer site that was on and off the net for half a year or longer, and what I’ve learned in the process.

<day id="1" />

We’ve gathered all the data from the client – we know they have implemented custom “skins” (basically controls that brand the mini sites based on the domain under which each site is displayed). Quite a cool solution. Also, since they were struggling with speed, they implemented a custom mini taxonomy based on Lucene.Net to speed things up. Yet the site is terribly slow and keeps showing the familiar (to a developer) “Application is busy under initialization” message from time to time.


The Console of Mass Content Management

This one definitely took more time than I initially expected, and before I devote even more to it I would very much like to hear your opinion. Do you find it useful? Which way should the development be going? But first things first…

Have you ever found yourself:

  • having to make a mundane change to a large number of pages?
  • in need of getting statistics on page properties or page type usages?
  • being curious about, e.g., what’s the oldest page on your site?
  • having to copy or move a large number of files from one folder to another, or between versioning and non-versioning virtual path providers?
  • renaming or deleting files in your file store en masse?

If the answer to any of those (and more) is a “yes!”, I believe you might find my little plugin useful.

The idea is to create a scripting environment to work with EPiServer on a more granular level than the existing PowerShell SnapIn API currently enables. Manipulate not just sites, but files and pages on a large scale, or perform statistical analysis of your content using a familiar and well-documented query language.

[Screenshot: the PowerShell console for EPiServer]

The PowerShell console for EPiServer provides you with two abstractions to work with:

Virtual Path Provider Drive

With the console you can browse the VPP and perform a number of file operations just as you would in a regular PowerShell console on a regular disk drive. Specifically, you can:

  • move files between Virtual path providers
  • move, copy or rename files and folders

Known limitations:

  • You cannot load files from disk directly onto a VPP and vice versa (this can, however, be overcome by mapping the path you want to migrate into your CMS as a native path provider and copying from that).
  • Some actions might not respect, or might unintentionally force, recursive operation.
  • The console might blow up unexpectedly. (What do you mean, crippled? I got all five fingers! Three on this hand, two on the other one!)

I think you might find it quite useful for offloading files from a versioning VPP onto a native VPP once you decide that you want to access the content of the files outside the CMS. Or for pulling your files into the Database VPP (available for download from EPiCode).

Page Store Drives

The console will map all your CMS page roots as drives based on the site name (the site name must not contain spaces for the current version to work). Now this one… the sky is the limit!

The items that the drive exposes are fully functional PageData objects; additionally, for your scripting convenience, all page properties are mapped so that you can access them as if they were regular POCO properties.

By far this is the coolest little toy I’ve played with recently – I would strongly advise that you look into the samples and put your imagination to work!

[Screenshot: the PowerShell console for EPiServer]

You can create statistics, modify pages based on regular expressions, filter, delete, rename, move around…

[Screenshot: the PowerShell console for EPiServer]

Naturally, the obligatory disclaimer is that you use the tool at your own risk. Make backups, and test your scripts on staging before doing anything. Heck! Don’t use it on production at all yet (!) – it’s very much an alpha and a technology demo.

[Download & Enjoy]

How to install?

Extract the DLL from the ZIP file into the BIN folder of your web application to install the plugin. Remove the DLL to uninstall it.

All constructive feedback appreciated!

CMS UX – give the content some thought!

One of the many things we debate constantly at Cognifide is how to improve the user experience: how to make editors’ lives easier, how to simplify the common everyday tasks, what can be automated, and simply how to make our customers smile a little when they use our projects.

For that to work, apart from the overall big blocks being in place and working seamlessly (which is the absolute minimum required), you need to be VERY attentive to details.

What happens when a user enters a place in the system where they are not usually required to work – are they properly guided? What do they see if they click on a little link somewhere in the corner? Does every image have an Alt text attached to it? Do your buttons have tooltips? Do the users have alternate views of all content? Does the system communicate abnormal states in a descriptive way and guide the user towards the solution? Is the UI logically laid out? Did you REALLY think about which property should go on which tab? Have you set up the property names in a way that makes sense to a non-programmer? Do they have descriptions?

Part of the job we do is helping sometimes-troubled EPiServer customers get their solutions, built elsewhere or in-house, to work, and I seem to be noticing some patterns which we at Cognifide have flagged as bad practices. Many of those stem from a lack of research into the content being served during the discovery phase.

Discovery phase? What’s that?

It seems that a great many projects are not well thought out in many respects. When you look at the solution, it feels like a developer just got some templates and ran with them. Once the front end matches what the HTML templates outline, the solution is pushed to production and forgotten by the design/development agency, and the poor customer struggles with it for years, trying to improve the ever-degrading performance and fighting a CMS UI that’s been thrown together in a rush – possibly getting aggravated in the process and rebuilding it again and again.

You need to realize that once your site goes to production, the trouble begins for your customer, not ends.

Understand your content, please

A very basic tendency in a lot of them is to store all the content of one type under a single tree node, or in a very basic hierarchy. But:

  • What is the volume of content at the start?
  • Have you talked to the client about the maintenance of the content?
  • How do they plan to store older content?
    • Are they archiving it?
    • Do they plan to serve it to the general public while archived?
    • Is the old content actively browsed on the CMS side?
  • What is the volume increase over time?
  • What is the profile of the content? Is it a catalogue? Chronological news library?
  • Is there a taxonomy in place?
  • How often and which content is being modified?

EPiServer shows pages in a tree, and while we have observed the CMS performance improving over time there are some basic scenarios that the hierarchy structure will never be able to deal with efficiently if not well thought out.

So your potential edge case scenarios might be that the customer has:

  1. 10 000 articles that need to be migrated for the site to go live, but they only plan to add 2-3 a month, or
  2. they might be starting fresh, but they plan to add 20 to 30 articles a day!

How do you deal with those?

Obviously the worst thing you can do is to put them all under the “Articles” node. The customer will be frustrated to no end! Both your reputation and the CMS’s get damaged while they try to do any basic thing in the CMS.

In the first case you need to work with the client to dissolve the articles into a hierarchy granular enough to leave you with 20-50 articles per node, tops. Dividing them into 10 categories of roughly 1 000 items each won’t do! If the page names are meaningful, you may attempt to create a structure based on the first and second letters of the names. This works best for directories of people or places.
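To make the letter-based structure concrete, here is a minimal sketch of the bucketing rule. Python is used purely to illustrate the logic (in an EPiServer solution this would live in C#), and the function name and paths are mine:

```python
def letter_bucket(root, name, depth=2):
    """Place a page under nodes named after the first letters of its
    name, e.g. 'Kowalski' -> /Directory/K/KO (two levels deep)."""
    # Ignore non-letters so punctuation in page names doesn't create buckets.
    letters = "".join(ch for ch in name.upper() if ch.isalpha())
    path = root
    for i in range(1, min(depth, len(letters)) + 1):
        path += "/" + letters[:i]
    return path

print(letter_bucket("/Directory", "Kowalski"))  # -> /Directory/K/KO
```

With two letter levels and 20-50 pages per leaf, even a directory of thousands of names stays comfortably browsable in the tree.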

The second case is probably going to happen when you work with any kind of news site, be it an intranet of sorts, a news agency, a TV portal or an information broker. In that case it makes the most sense to put the content into a chronological structure: have a node for each year and month (and day, if needed) in which there is an article.

AUTOMATE!

When a user writes an article that is supposed to fit into your content plan, move it into the proper node automatically. In the first case, move it to the proper category or, based on the page title, to the proper place in the catalogue. In the chronological plan, move it to the day or month node upon creation. If a new node needs to be created for it, that’s your responsibility. Organize the content for the user or you will be in a world of pain sooner than you expect!

Those tasks are easily automated by hooking into the page save and create events. Your customer will love you for it.
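The shape of such a handler can be sketched as follows. This is an illustration of the idea only: Python stands in for the C# handler you would attach to the CMS page-created event, and PageStore is a toy stand-in for the page tree (all names are mine):

```python
from datetime import date

def chronological_path(root, publish_date):
    """Compute the year/month node an article belongs under."""
    return f"{root}/{publish_date.year}/{publish_date.month:02d}"

class PageStore:
    """Toy stand-in for the CMS page tree (illustration only)."""

    def __init__(self):
        self.nodes = set()     # node paths that exist
        self.location = {}     # page name -> node path

    def ensure_node(self, path):
        # Create the target node and its parents if missing; per the text
        # above, creating missing nodes is the code's job, not the editor's.
        parts = path.strip("/").split("/")
        for i in range(1, len(parts) + 1):
            self.nodes.add("/" + "/".join(parts[:i]))

    def on_page_created(self, page_name, publish_date, root="/News"):
        # The handler you would hook into the page-created event:
        # compute the proper node and move the page there automatically.
        target = chronological_path(root, publish_date)
        self.ensure_node(target)
        self.location[page_name] = target
        return target

store = PageStore()
print(store.on_page_created("Budget announced", date(2009, 7, 7)))
# -> /News/2009/07
```

The same handler shape works for the catalogue plan: swap the chronological path computation for a title-based one and the rest stays identical.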

Naturally you don’t necessarily have to have a single plan on a given site. A site can have both branches with news-like plan and directory-like sub trees.

The bottom line is – you need to plan ahead, and you need to learn about the content profile.

Distinction with a difference

You need to realize the distinction between the content and the presentation of it. Don’t try to cram the content into the browsing structure. Separation of concerns should not be limited to code organization. Separate concerns in the content structure as well.

Design the user’s trip around the best SEO practices. The hub pages should make sense from the visitor’s perspective. DON’T try to put your articles under a hub page just because it browses them. This may work for tiny sites, but then again, those are not the sites you will run into trouble with, because your customer barely ever touches them.

Have your content stored in a separate branch and reach out for it with a tag set that is related to the hub page.

Build a logical taxonomy and stick to it!

That basically means – treat your content equally no matter where it’s stored, preferably having it tagged with a high-performance framework like the Faceted Navigation Framework published by Cognifide some time ago, or make it Lucene.Net based. You can try to use the EPiServer built-in categories. We have had limited success with them and with the performance of FindPagesWithCriteria (which we have effectively banned from our arsenal), but I was told performance on that front has improved greatly in the latest EPiServer CMS release.

Regardless – don’t rely on the page hierarchy structure to select the content. It doesn’t matter where it is; the metadata is what should be driving the content stream. You can hand-pick your navigation links, fine, but treat article lists, news, announcements and events as a single content stream. Use a taxonomy to divide it into categories and you will be much happier in the long run, as you will gain much more flexibility to reorganize the structure and move the content around when its profile changes later.
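As a minimal illustration of selecting by metadata instead of by location (Python for brevity; the page list and tag names are made up):

```python
def content_stream(pages, required_tags):
    """Select pages by their metadata (tags), not by where they
    happen to live in the page tree."""
    required = set(required_tags)
    return [p["name"] for p in pages if required <= set(p["tags"])]

# A flat stream of pages; only the tags decide what shows up where.
pages = [
    {"name": "Q3 results",   "tags": ["news", "finance"]},
    {"name": "Office party", "tags": ["news", "social"]},
    {"name": "About us",     "tags": ["static"]},
]
print(content_stream(pages, ["news"]))  # -> ['Q3 results', 'Office party']
```

A hub page then just asks the stream for its tag set; moving a page to a different node changes nothing in what the hub displays.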

Having used EPiServer CMS for nearly four years now, these are the basic principles we have come to establish.

I wonder what your practices are. Where do you think our practices do or do not fit your design philosophy? Is there anything we could do better? Do you have practices regarding content planning? How do you analyze the content to make the best plan for it?



Easy Enum property for EPiServer

One of the most frequently and eagerly used programming constructs of the Microsoft .NET Framework is the Enum. There are several interesting features that make it very compelling for all kinds of dropdowns and checklists:

  • The bounds factor: proper use of an Enum type guarantees that the selected value will fall within the constraints of the allowed value set.
  • The ability to treat Enums as flags (and compound them into flag sets) as well as one-of selectors.
  • The ease of use, and the potentially complete separation of the “Enum value” from the underlying machine-type representation, which ensures the most efficient memory usage.

Surprisingly enough, EPiServer as it stands right now does not have an easy facility for turning Enums into properties. To give credit where credit is due, the EPiServer framework provides a nice surrogate that mimics that behaviour to a degree. The relevant property types are:

  • PropertyAppSettingsMultiple – which “creates check boxes with options that are defined in the AppSettings section in web.config. The name of the property should match the key for the app setting.”
  • PropertyAppSettings – which “creates a drop down list with options that are defined in the AppSettings section in web.config. The name of the property should match the key for the app setting.”

You quickly realize, though, that the properties have some limitations that make their use a bit less compelling:

  • The properties are not strongly typed.
  • The property entry in the AppSettings section has to have a name that matches the property name on the page.
  • It’s rather poorly documented; other than this blog entry or Erik’s post documenting it, I could not find any other examples of how to use them (but then again, who really needs docs when we have Reflector).
  • You cannot have the very same property duplicated on the page, since you can only have a single property of a given name per page. So you need multiple entries in AppSettings that match the name of each of those properties on your pages. I know… semantics, but still…
  • You are working with strings rather than enums (did I mention it’s not type-safe?).
  • The values in AppSettings are stored in a somewhat DSL-y manner (consecutive options are separated from each other with the “|” character; the name and the value are separated with a “;”, for example: <add key="RegionId" value="First Option;Option1|Default Option;Option2|Disabled Option;Option3" />), and on one occasion I entered a string there that caused the server to crash.
  • The values are not translatable, or at least I could not find out how to do it, and Reflector digging rendered no results either.
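For clarity, here is what parsing that option format amounts to: a small sketch of the “|”-separated entries with “;”-separated name/value pairs described above (Python for brevity; the function name is mine):

```python
def parse_options(value):
    """Parse the AppSettings option string: entries are separated by
    '|', and each entry is a 'display name;value' pair joined by ';'."""
    options = []
    for entry in value.split("|"):
        if not entry:
            continue
        # partition keeps everything after the first ';' as the value
        name, _, val = entry.partition(";")
        options.append((name, val))
    return options

setting = "First Option;Option1|Default Option;Option2|Disabled Option;Option3"
print(parse_options(setting))
```

Every display name/value pair is positional in one flat string, which is exactly why a single stray separator can render the whole setting unparsable.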


Apparently I have written something along these lines before, for CMS 4, and it looks like someone still needs it, as I got a request for an updated version a couple of days ago. So here we go:

For the most part the syntax of the call is equivalent to what it was before, so check out the old article for details. What I’ve added this time around is:

  • @PropertyName can be declared as “%” if you want to look in all property names
  • @PropertyType can be -1 if you want to look in all property types; otherwise you need to specify a type id (this has changed from type name due to database schema changes)
  • additionally, this version of the stored proc will only look in the master language branch, so it will work for single-language pages and, on multi-language sites, for language-agnostic properties (should you require the language to be variable, the change is pretty simple; I can send you the updated version by email).
/****** Object:  StoredProcedure [dbo].[PagedSearch]    Script Date: 07/07/2009 12:18:10 ******/
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[PagedSearch]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[PagedSearch]
GO

CREATE Procedure PagedSearch
    @Condition varchar(1024),
    @PropertyName varchar(1024),
    @PropertyType int,
    @PageSize int,
    @PageNumber int,
    @Offset int
AS
BEGIN

    DECLARE @RowStart int
    DECLARE @RowEnd int

    SET @RowStart = @PageSize * @PageNumber + @Offset;
    SET @RowEnd = @RowStart + @PageSize + @Offset;

    WITH PageRefs AS
        (SELECT page.pkID as PageId,
            ROW_NUMBER() OVER (ORDER BY pageLang.StartPublish DESC) as RowNumber
            FROM tblPage page, tblProperty propValue, tblPageDefinition propDef, tblPageLanguage pageLang
            WHERE page.pkID = propValue.fkPageID
                AND page.fkMasterLanguageBranchID = pageLang.fkLanguageBranchID
                AND page.pkID = pageLang.fkPageID
                AND propValue.fkPageDefinitionID = propDef.pkID
                AND (@PropertyType = -1 or propDef.fkPageDefinitionTypeID = @PropertyType) -- is proper type
                AND propDef.Searchable = 1 -- the property is searchable
                AND propValue.String like @Condition -- contains facets
                AND propDef.[Name] like @PropertyName) -- property of proper name
    SELECT PageId
        FROM PageRefs
        WHERE (RowNumber Between @RowStart and @RowEnd) or (@PageSize = 0);
END
GO
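As a quick illustration, the procedure could then be invoked like this (the parameter values here are purely hypothetical, and the search term is just an example):

```sql
-- Hypothetical example: find pages whose searchable short-string
-- properties contain "episerver", matching any property name and
-- any property type, returning the second page of 20 results.
EXEC PagedSearch
    @Condition    = '%episerver%', -- LIKE pattern matched against the property value
    @PropertyName = '%',           -- any property name
    @PropertyType = -1,            -- any property type
    @PageSize     = 20,
    @PageNumber   = 1,             -- zero-based, so this is the second page
    @Offset       = 0;
```

Passing `@PageSize = 0` disables paging and returns all matching page IDs.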

However, looking at how the schema has changed over time, I am not convinced this approach is really the best one for someone who is not prepared to deal with the changes (i.e. you had better be able to update the stored procedure when the schema changes, or bribe me with pizza and beer for updates :) ).

Additionally, this procedure only searches properties that store their value in the short string field. To make it look in the long string instead, change the `propValue.String like @Condition` line to:

AND (propValue.LongString like @Condition)

or, alternatively, to look in both, change it to:

AND ((propValue.String like @Condition) or (propValue.LongString like @Condition))

Enjoy!

Eric Evans on Domain Driven Design

Eric Evans of Domain Driven Design will be giving a talk at PUT just before Eclipse DemoCamp, June 24th @ 18:00. Cognifide are sponsoring the event, and it would be a great chance for you guys to dust off those design skills. Domain Driven Design is also going to be a course that we hope to run out of the Cognifide office later this year, with a little help from our friends at Skills Matter.

Register for the Domain Driven Design course; it is platform-neutral, and in fact Eric gives both .NET and Java versions of it.

More on Eric & DDD:
http://skillsmatter.com/course/design-architecture/domain-driven-design
http://skillsmatter.com/podcast/design-architecture/domain-driven-design
http://www.infoq.com/interviews/domain-driven-design-eric-evans

SoakIE – a Web Server Stress Tool with a Twist

A week or so ago, a couple of friends on another project at Cognifide ran into a wall while trying to load-test their website. The problem was as follows: the website is heavily AJAX-based; the page merely loads a stub in the initial request and then fetches the rest of its data dynamically, so traditional web testing tools are fairly useless against it. What they tried was to set up a number of Selenium clients to pound the server, but that turned out to be quite taxing on the machine doing the testing; it was not possible to run more than 10 clients on a fairly powerful machine.

There are also other limitations, like the time to wait for the server to time out and the time between clicks, which I am not sure the tool allowed them to adjust. Talking to them, I recalled a tool for grabbing website thumbnails I had used a long time ago. One option would have been to make a batch file with it: the tool would grab the site's thumbnail and stress the server in the process, but they would still have had to set up a number of clients. It also creates and tears down an instance of IE every time, making it suboptimal for the task.

So a couple of evenings later (and a few back-s and forth-s during the testing sessions) out comes SoakIE:

SoakIETest

Read the rest of this article »
