Friday, November 4, 2011

A homebrew ATSC multi-room PVR project

I'm not involved much in technical matters at work right now, and thus I've fallen back to updating my own home in my spare time with a six-month project.

Canada switched to ATSC on September 1st, so my main objective is to cut cable TV and modernize my two standalone cable-company-supplied PVRs, which have an interface that dates back to 2001.

In a nutshell, I'll do this in the following weeks:
  • No more lightning in the house: Installing and grounding an exterior OTA antenna
  • Thank you Foxconn and the MediaPortal Team: Building and configuring a "budget" Windows 7 TV Server with MediaPortal
  • Rsync now, robocopy later: Dismantling my FreeNAS-based NAS to consolidate data on the TV Server (a sketch follows this list)
  • PXE for the masses: Deploying 2-3 MediaPortal client nettops using PXE and OpenWRT
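For step 3, the plan is roughly this - a minimal sketch, where the host names and paths are made up and the real thing will get a few dry runs first:

  # one-time pull of the media share from the FreeNAS box
  rsync -avh --progress freenas:/mnt/tank/media/ /data/media/

  rem later, once everything lives on the Win7 server, mirror between volumes
  robocopy D:\staging\media E:\data\media /E /COPY:DAT /R:2 /W:5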

I've got some of the pieces in place. The Win7 server is running and I should be ready to test MediaPortal soon. Why Win7 and not Win2008? The reason is that my ATSC card (an AVerTVHD Duet) doesn't have drivers for 2008, and I don't need a domain controller for my house anyway. As for the nettops, I have one in hand already, and I want to try installing them using PXE (just for kicks).
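Speaking of PXE, OpenWRT's dnsmasq can handle both the DHCP boot option and TFTP by itself, so the whole netboot setup should fit in a few lines. A minimal sketch with made-up paths (the actual boot files would come from the installer image):

  # dnsmasq options on the OpenWRT router
  # serve boot files over TFTP from USB storage
  enable-tftp
  tftp-root=/mnt/usb/tftp
  # boot program handed out to PXE clients
  dhcp-boot=pxelinux.0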

I'll try to post some pictures and details over the coming weeks. They should roughly follow the 4 steps above.

Olivier

Tuesday, September 27, 2011

Yes, I'm still alive.

Not many posts lately, huh?

My older blog, Technocrat-UX, was a way for me to document quirks and techniques related to HP-UX, BladeSystems, and some other technologies. It enjoyed some success, as these were niche but relevant subjects.

That model doesn't fit well with The ex-sysadmin, where I originally intended to document my new job as a systems architect. The main problem is that when designing IT architectures, the ideas and diagrams that result from my efforts are not generic and reusable enough, and thus not that interesting. Furthermore, from a security perspective, a lot of work needs to be done to obfuscate the information - any information - before it is released. I can't, for instance, publish a network topology to the public just like that.

Up until recently I did, however, intend to write a paper and presentation documenting a reference architecture for IED event and measurement collection, following my 18-month experience with Cooper's products. But due to some restrictions, that has not been possible yet.

In the meantime I'm keeping the blog going with posts that I *think* could be interesting to sysadmins, architects and... ex-sysadmins.

O.

Tuesday, August 30, 2011

HP's Power Advisor

This morning I had to use HP's Power Advisor to estimate the power load of some small servers I need to deploy (DL360 G7s). I remember using an older tool some years ago, but this new one is much better. It's available here:
http://h18004.www1.hp.com/products/solutions/power/advisor-online/HPPowerAdvisor.html



Friday, August 26, 2011

Dealing with MFT servers: a systems architect versus systems administrator love story

In my past life as a system administrator, I once had to build a secure MFT (managed file transfer) server from the ground up. I pulled it off using the HP-UX infrastructure I was comfortable with and OpenSSH. You wouldn't believe, however, how much tweaking had to be done to have the user accounts (which were stored in /etc/passwd and /etc/shadow) synchronized reliably in a clustered system spanning two sites. It took me a few days to make sure everything was correct.
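The core of it boiled down to something like this - a bare sketch with a made-up peer name; the real setup wrapped it in locking, backups and sanity checks:

  # push the account files from the active node to its peer
  rsync -a /etc/passwd /etc/shadow /etc/group root@node-b:/etc/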

Now it is time to do it again at a new place. I have to design another highly-available MFT server that will be wedged between two DMZs, and besides supporting SFTP, I want it to have an HTTPS-based "drop box" feature so end-users can upload files easily without needing an SFTP client. Oh, and by the way, I need it to authenticate users against a Windows domain this time.

If I were still a sysadmin, I'd extend the first solution by adding an Apache HTTPD server and some open-source file upload tool. Then, I'd have to find a way for *BOTH* Apache and OpenSSH to authenticate users. OpenSSH would probably need to rely on PAM, and for Apache I don't have a clue. Yet no problema; I would just shrug and say I can do that, then spend a few days tying everything up. The end.
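For the record, the OpenSSH half of such a custom build isn't even the hard part; a chrooted, SFTP-only setup is a handful of sshd_config lines (the group name here is made up, and PAM would handle the actual domain authentication):

  # sshd_config: lock MFT users into chrooted, SFTP-only sessions
  Subsystem sftp internal-sftp
  UsePAM yes
  Match Group mftusers
      ChrootDirectory /srv/mft/%u
      ForceCommand internal-sftp
      AllowTcpForwarding no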

But for a systems architect, things don't work this way.

Why? Because I have to assume there is no guarantee the sysadmin who will do the grunt work of building this will be willing, or experienced enough, to install and configure a custom solution. And even if he/she is willing, I have to consider that each man-hour counts for serious dough within the frame of a project. With custom hacks like this, the hours can add up quickly depending on whose desk the work falls on.

Therefore, I did what system architects do: I tried to pick a turnkey solution, and it will have to be shoved down the IT team's throat.

I always hated this when it happened before. Picture this: The architect goes on a golf course or whatever, and randomly picks a solution based on bullet points and checklist tables. Then the IT operations guy has to take whatever crappy, slow and expensive "enterprise" software the architect purchased at a pharaonic price, and make it work satisfactorily to fulfill a business need. More often than not, such lame software ends up in the garbage bin with the IT team developing its own in-house solution to patch things up.

As I've been on that side of the fence before, I try to do things differently to prevent this from happening. So when it's not too daunting to do so, I actually try out software before choosing it.

So back to the MFT. I went searching on the web and picked a market "enterprise" leader to try out. Not only does it not support high availability easily, the software is clumsy: it took 30 minutes to run its InstallShield sequence on a Windows 2008 VM, and uninstalling took just as long. And to get SFTP support, I actually had to pay a premium over the base price. Not good.

Few other vendors had solutions that seemed serious enough, though. By serious I mean that they have to offer technical support and have some agility in dealing with enterprise customers. Then a colleague of mine found a very elegant piece of software named JSCAPE MFT Server. Installation is a snap and it's very easy to configure; I was up and running in a few minutes. As a bonus, its feature set is actually useful and seems to have been designed from user requests instead of some odd crystal ball. I've been trying it out this morning and, so far, it works very well.

The MFT server itself runs on Windows, Linux, some Unices and Mac OS X. Installing the RPM went without any problem on CentOS 6. It is managed by a Java-based GUI that I installed on Windows -- I'm usually not fond of thick clients compared to web-based administration GUIs, but this one is efficient and feature-rich without being clumsy. No bells and whistles, and it is fine that way.

The Java-based "Server Manager" does the job efficiently

Enabling the web-based transfer option was quick and easy to do. What helps is that the software comes with a manual that, without being too detailed, provides lots of screenshots and cookbook-like procedures to configure the server quickly. It took me maybe a minute or so to enable the web server, set up LDAP-based authentication, add a dummy user, and try it out.

The resulting web service might be bland, but it does the job; once again, no fireworks. This is very important, as it will be deployed to users who might not always be too tech-savvy.

The web-based service might be simple, but that's exactly what I want


As a system administrator, I would actually want to work with software like this because it's elegant. It does a few things, and it does them well. I like it when software feels natural and everything works the first time without a glitch. As with any software, I'm sure there are some bugs somewhere, but it sure is a good start.

Is JSCAPE MFT Server the iPod of MFTs? I'd say it's not far from it. Chances are that if my project is greenlit, I'll be first in line to purchase it. Whoever wrote this, good work!

O.

Friday, August 12, 2011

Interesting thread on Slashdot

Sysadmins and developers aren't the same, but they both share a strong technical background.

I suggest you read this thread. The first comment, titled "Stay Put", might be hilarious, but it makes a lot of sense.

http://ask.slashdot.org/story/11/08/12/1433239/Ask-Slashdot-Am-I-Too-Old-To-Learn-New-Programming-Languages

The ex-sysadmin I am often asks himself if moving up was a good decision. It turns out it might have been after all.


Monday, August 8, 2011

A presentation on IMS and PI might be coming

After a hiatus in 2010 due to a career change, it's now time to start writing papers and building presentations again. I'll be submitting a paper for Cooper EAS's 2011 Smart Grid conference covering my experience with a major Yukon IMS deployment I've been involved with as an IT architect. I'll also explain how we used the SMP gateway to link substations to the OSISoft PI data historian in order to collect critical data.

I'm not in academia, so my work is in no way scientific. Furthermore, I'm part of a huge team that includes a fair share of people from IT operations, control engineering and electrical engineering, so my view is mostly IT-centric.

There are many air travel restrictions at the office, so I hope I'll be able to make it. The worst-case scenario would be driving from Montreal to Minneapolis, which, if it's any consolation, won't require a full-body scan.

O.

Monday, July 4, 2011

Forcing laptop users to use only an Iron Key (and nothing else)

I need to transfer files between two networks which must remain physically isolated for a few months, until a beefed-up, permanent security solution becomes possible.

The easiest way to do this on a budget consists of using USB keys to transfer files between two laptops, one connected to the intranet, the other to the secure network. Of course, the "secure" laptop must be stripped to the bone and have an up-to-date antivirus so it can trap known viruses currently in the wild. That won't prevent any new virus from coming in, but there is an urgent business need to transfer these files, so there is not much that can be done in the short term.

I'm currently using IronKeys to ensure the integrity of the data, and also to prevent any data theft if a key is ever stolen. However, one must "encourage" end-users to use these keys, or else they might end up using whatever key they can lay their hands on to avoid having to enter a password.

On Windows XP, there is no built-in way to filter USB keys based on their manufacturer. IronKey suggests a partner product named DeviceLock, but being a commercial product, it comes with a price. There are many other endpoint security tools that can be purchased to offer similar functionality, too. In my case, I was on a deadline and had missed the opportunity to acquire software and charge it to the project, so it was preferable to use something free as a stopgap measure.

This afternoon, I've been running a few tests with USBSecure. It SEEMS to be free. There is no license, but all the code is published, so it can be tweaked if necessary. USBSecure is simple to configure: define the users that use the computer, and whitelist the device IDs that are allowed on the system. I've been testing it for an hour or so, and it seems to work correctly. I might come back and give more details later.
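By the way, one way to collect the device IDs to whitelist on XP is to plug the key in and dump what Windows enumerated under USBSTOR; the instance IDs carry the vendor and product strings:

  rem list the USB storage devices Windows has seen, with their IDs
  reg query HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR /s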

Of course, transferring files between two adjacent PCs might look clumsy. But there are a lot of (justified) restrictions on secure control networks. What is sad is that Stuxnet worked exactly this way, propagating through USB keys. No matter how much we try to control their usage with endpoint security software, USB keys remain a vector of infection for secure networks. Better long-term solutions must be put in place to ensure that any file transferred onto a secure network is, indeed, clean. I'll be working on such solutions in 2011-2012.

O.

Friday, June 17, 2011

Why hosting enterprise-level videos on YouTube is not a good idea

Last week, OSISoft sent their customers an e-mail pitching their new OSISoft Learning Channel on YouTube. Since they probably knew that many enterprise firewalls block YouTube, their communication pleaded that we should politely ask IT to authorize it. They also referenced Buck Bard's blog post Don't be anti-social on social networking, where he basically says that internal collaboration sites don't measure up to public ones like YouTube.

He's right on the social networking argument -- the corporate or "private club"-type social networking sites I've seen aren't so great when compared to the behemoths who've been able to get a foothold in the last five years. But isn't that what vCampus is, in essence? To benefit from its social networking features, one has to be a (paying) vCampus user. Maybe OSISoft could consider opening up parts of vCampus?

The problem, which OSISoft acknowledged in their e-mail, is that most social networking sites are blocked for many corporate users. Facebook is one thing, but I'll concentrate on YouTube in this post.

To illustrate this, let's compare YouTube to television: if I walked into the CIO's office and asked for every cubicle to have a cable TV set with every channel offered by the cable company, what do you think his answer would be? Even if it all came for free, I bet it would be "no", the reasoning being that nobody actually needs this, and it's a perfect way to lower productivity. Now, if I asked for TV sets with access only to all-news networks because the employees are financial analysts who actually need this to do their job, it might work.

The problem with YouTube is exactly that: it's a TV with millions of channels and no way to filter out content that isn't relevant to your workplace. Yet even though there are workplaces where YouTube is barred due to questionable content, and rightly so, tech companies keep using it to publish their stuff. OSISoft are not the only ones, by the way -- ArcSight did the same thing two years ago.

What are the solutions?

The first one is to not use YouTube at all to publish content. That's too bad, as YouTube is a really good platform for publishing videos easily and cheaply. OSISoft probably doesn't want to invest thousands of dollars into a private streaming solution (and the bandwidth), which is understandable, but by choosing this path they cut themselves off from some of their customers.

The second one would be for YouTube to make an "enterprise-level" version of their service, under a completely different name and domain, and charge a small fee to qualifying content publishers. Someone thought of this back in 2007, and I have yet to see a solution. The problem with this scenario is that over time, it would become another all-you-can-eat lineup. Victoria's Secret would probably end up calling themselves "enterprise-level", and I don't see where their videos would fit in a financial analyst's job.

So that's it. OSISoft's Learning Channel is on YouTube. Don't get me wrong -- I checked it out, and the initiative is much appreciated! But since my employer doesn't let me watch YouTube, I'm stuck watching these videos on my own time, at home or at the internet café.

Time to go grab a latte.

O.

Wednesday, April 27, 2011

SFTP vs FTPS: tough choices


Last week, I had to design, in a hurry, a secure file transfer mechanism between two DMZs on a zero budget which, in a nutshell, meant reusing the Windows servers already there and not purchasing any third-party software.

I had to choose between using SFTP, a nice protocol, and FTPS, which I've been comparing to a bastard child for years.

I don't like FTPS mostly because it's a patch on FTP. For one, FTPS is harder to firewall than SFTP; it behaves exactly like standard FTP, with a control and a data connection, the difference being that TLS is used to encrypt them. As with standard FTP servers, the server must be configured with a fixed range of passive ports, and the firewall must let these ports through. Why? Because the firewall has no way of knowing which dynamic port has been assigned to a passive data connection... and it can't sniff it out of the control connection either, as it's encrypted!

Even though it's not exactly what I would call an elegant protocol, is FTPS actually easy to work with? The answer is yes: I was able to install IIS 7.5's FTP publishing service on 2008 R2 and have an FTPS server working within minutes. That is good enough. And in IT, good enough is, well, Good Enough.
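The one FTP-ish chore left is pinning down the passive range I mentioned above. In IIS 7.5 this is a server-level setting; a sketch using appcmd, with an arbitrary port range (double-check the section name against your IIS version):

  rem fix the passive data-channel range so the firewall can be configured
  appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:50000 /highDataChannelPort:50100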

So, here are my thoughts:

If your server will be hosted on any kind of Unix, choose SFTP. It has been built into OpenSSH for years. The drawback of OpenSSH is that it doesn't support virtual users, and this can make high availability tricky; you'll need to synchronize /etc/passwd entries, even if using AD authentication.

On the other hand, if you will host the service on Windows, you might be better off with FTPS, as it is included with IIS 7.5 and high availability is even possible. To support SFTP on Windows, you either need to install unsupported open-source software (unacceptable in many secure, enterprise environments) or purchase a third-party product such as WS_FTP Server (which carries a premium if you need SFTP functionality).

As for CLI clients that support automation, there are plenty to choose from no matter the platform. For SFTP, on Unix just use the sftp command; on Windows, try PuTTY's excellent psftp.exe. For FTPS, I suggest you try cURL, which runs on both Unix and Windows.
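A few hedged one-liners to illustrate (host, user and file names are made up):

  # SFTP batch transfer with OpenSSH's sftp (reads commands from stdin)
  echo "put report.csv /incoming/" | sftp -b - user@mft.example.com

  rem same idea on Windows with PuTTY's psftp
  psftp -b batch.txt user@mft.example.com

  # FTPS upload with cURL over an encrypted control connection
  curl --ftp-ssl -u user:pass -T report.csv ftp://mft.example.com/incoming/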

So, to conclude: SFTP if using a Unix server, FTPS if using a Windows server.

In my case, I'm going with FTPS.

O.

Wednesday, April 13, 2011

Gabriel Consulting Group survey on Oracle and HP-UX

As many HP-UX admins still read this blog, I thought I'd post this. GCG is running a survey to gather insights on what you think about Oracle's decision to stop developing products on Itanium, and about Oracle in general:

http://survey.gabrielconsultinggroup.com/limesurvey/index.php?sid=73634&lang=en

I got this link from an article that Dan Olds posted on The Register.

It takes maybe 10-15 minutes to answer the survey and I think it is worth it, as the results will no doubt end up being published by HP somewhere down the road. Even though I'm sure they're independent, the questions and tone of the survey are not, er, I'd say, totally objective. I answered it not as an HP-UX admin (which I no longer am), but as a systems architect for an enterprise that runs a mixture of HP-UX, AIX, Solaris, and Windows. So I tried to stay unbiased. You should do the same.

O.

Wednesday, April 6, 2011

PI DataLink Server and Excel Web App: A wedding cake dilemma



The project I'm attached to has, in its list of technical requirements, the installation of Excel Web App (EWA) along with PI DataLink Server (DLS). It is not clear what the customer intends to do with it, but my guess is that it will be used to show PI data to end users through a web-based interface.

The DLS manual describes four user roles, two of which are directly related to Excel: a publisher and a reader. This pretty much sums up what it is designed for: some people, who are PI experts, develop and publish workbooks using a real Excel with PI DataLink, while common end users read them using a browser. This apparently read-only nature of DataLink Server (which I need to confirm) is an important point, as from my understanding, it is positioned to be a simple web reporting platform.

I've recently had some time to experiment with these web features to try to predict what the developers will end up doing in the long run. I also had the hope of leaving the marketing pitch to marketers and finding out the real advantages of going in that direction instead of sticking with a deployment based on the standalone Excel application.

I'm not an Excel whiz kid, and I'm even less of a SharePoint expert. That being said, after a few mishaps, I managed to make a proof of concept with DLS and EWA using the most basic dummy report I could build with my limited knowledge of PI:


The wedding cake dilemma

I'm glad to announce that PI DataLink Server works as designed within the Excel Web App. However, when playing with it, I couldn't stop thinking about a three-layer wedding cake. Why? Because, you see, pitting EWA against standalone Excel is like comparing that wedding cake to a slab of brownies. Both will easily feed dozens of people, but the wedding cake will take longer to assemble, be more expensive, and each layer will need to be supported by the one underneath (I also think the brownies will be tastier, but that is beyond the scope of this article).

I had no doubt that the combination of the three layers consisting of DataLink Server, Office Web Apps and SharePoint involved lots of other subsystems too. This presentation done by Microsoft last year confirmed my suspicions. IT Operations would have a hard time supporting all that if the dependency hell between all those subsystems ever hit the fan. Understandably, as a systems architect, I wasn't very comfortable greenlighting the use of DataLink Server at first glance. Is it safe to assume that if an architecture is built like a wedding cake, it had better offer something big in return, or else it's not worth it?

I think that in this particular case, it will be worth it if your experts use PI DataLink a lot and need to deploy ad hoc reports quickly to a controlled (i.e. not massive), read-only audience.


Using EWA and DLS for ad hoc reporting

The ugly sample report pictured above is what I would call an ad hoc report: it's a quickie, made in a hurry to fulfill an unexpected business need. These can be done in a matter of minutes and published as a web spreadsheet to be consumed by users who have no technical knowledge of PI. There is no need for these users to have Excel on their client, as everything runs in a stripped-down version of Excel straight in the browser. This could prove extremely useful when dealing with mobile devices in the future, as I don't expect Excel and DataLink to be running on the iPad anytime soon.

Furthermore, since you don't have a bunch of standalone Excels running around in the wild, you don't have to:
  1. Ensure all users have the correct Excel version;
  2. Install PI DataLink on each of these Excels and maintain this installed base, which can be substantial;
  3. Deal with the security hassles of opening up network access to the PI infrastructure for every laptop in your WAN (you only need to open it to the server running DLS).
Interesting. One might expect a lot of reports to be created that way.


Preventing ad hoc report sprawling

Now comes a question: what do we do to prevent "ad hoc report sprawling"?

I think that ad hoc reports should be deployed to VIP users as prototypes, until the time comes to move to something better if they ever need to reach a wider audience. By "something better", I'm talking about a dedicated reporting system such as Crystal Reports, for the kind of reports that pull data not only from PI but also from AF and other sources. The kind of reports that are read daily by people who make business decisions based on their contents. The kind that end up on a printer, to be read to/by upper management.

These official reports should still be designed, deployed and stored on a dedicated platform. Why? Because:
  1. EWA and DLS have their limits; my understanding is that they can pull data only from PI points, not AF (on the other hand, there are ways to combine web parts with DataLink Server, but I'm not good enough to try that out);
  2. I also have a feeling that using EWA as a reporting solution might impact performance on both your SharePoint and PI infrastructure, as nothing will prevent John Doe from pressing CTRL-ALT-SHIFT-F9 (in caps, of course) all the time to be updated to the second. Recalculation is much slower on DLS than within the real Excel, so I think there is a performance hit. This impact needs to be evaluated, which is why I talked about a controlled audience above.

Conclusion

The possibility of deploying ad hoc reports to read-only users who don't need to have Excel at all is the main advantage I've seen so far in deploying an architecture based on PI DataLink Server, Excel Web App and SharePoint. However, as this might be a complex solution that your IT Operations will need to take care of in the long run, you need to be sure you really need it.

O.

Am I off track on this? Have any comments? Please post below and I'll be glad to write an update to this article.

Wednesday, March 23, 2011

Oracle dumps Itanium. -1 for HP-UX, +1 for Integrity

I'm no longer involved with HP-UX and Itanium, but I was an HP-UX geek for 10 years, and some readers of my previous blog (aptly named Technocrat-UX) were following me specifically for my comments on that market. I'm no analyst, but my advice is free, so here are my thoughts on this story, which broke today.

I'm not surprised by Larry's decision to stop development of Oracle on Itanium. It was just a matter of time before Oracle tried different stunts and measures - any measure - to save their SPARC platform and lock in customers. This one is a desperate measure indeed. I know a lot of SAP system administrators who won't be delighted to learn this. Some have been claiming for a while that Oracle is the new CA, and this couldn't be more true. If I were still an HP-UX admin, I'd be directly targeted by this decision. But I wouldn't say "Fuck HP". I'd say "Fuck Oracle", big time.

How does that look for HP-UX? Without an enterprise RDBMS, not good. Not good at all. But all is not over for Integrity. Rob Enderle picked up the story today and revealed some interesting information that was shown to him under an NDA:

Unfortunately for Oracle, HP just had a massive analyst event and in the server break-out had showcased under NDA the future for Itanium in new products. While I can’t share that future, it is NDA, and for those of us in the session there was no doubt that Itanium is going to continue. More importantly, the changes being made should make it vastly more cost effective than anything Oracle can announce on SPARC. You’ll understand what I mean in a few months, or if you have an HP relationship, ask HP what I’m talking about and you’ll have a big “ah hah” moment. But you won’t be able to share it any more than I can.

I've been thinking about his statement, trying to read between the lines. Here is my own speculation of what may be ahead. Note here that he's talking about the Itanium platform, there is no mention of HP-UX anywhere in his post. What can I make of this?

Here is what I know:

1. Enderle says that the new platform will be "vastly more cost effective than anything Oracle can announce on SPARC".

2. I bet that Microsoft are probably annoyed by Oracle (sorry, no time this evening to find an article to back this up).

3. I've learned from a trusted source (without signing a CDA) that the DL785 will retire and only the DL580 and DL980 series will be left. This is something anyone can deduce from HP's web site: there is currently no G7 offering of the DL785.

...and here is what I predict:

1. Microsoft will be looking for an enterprise-level platform to harness MS-SQL, which has become over the years the "other" enterprise RDBMS.

2. All my current architecture projects are based on MS-SQL and I've learned a little about its licensing recently. The software is priced per CAL or per processor/socket (your choice), and each processor costs - well, a lot of money. Customers with thousands of users will want to get the most bang out of every processor they use. Does that sound like a return of Windows and MS-SQL to the Integrity platform? Hard to say if MS keeps similar pricing with Tukwila's four cores, but it is possible. Microsoft could offer this as a vertically integrated solution pitched to customers who currently rely on Oracle's (or IBM's) solutions.

3. To cater to those mid-size Microsoft customers who aren't interested in blades (let alone Superdome 2s) for one reason or another, HP will release something like an rx5800 and rx9800 based on the industry-standard components of the big ProLiants - namely, the 580 and 980. These are reliable, huge workhorses. By swapping only a few components, HP will save plenty and be able to offer these servers at a small premium over the x86 versions.

4. As for HP-UX? While the outcome isn't clear, I frankly don't expect Oracle to really stop releasing their RDBMS on HP-UX; I'm sure there are plenty of customers left, and some bean counter at Oracle will realize the high risk of losing them forever if a migration to SPARC is shoved down their throats. If they keep their stance, it's their loss. I haven't cared about Oracle for a while now.

These are my thoughts.

O.

Wednesday, March 16, 2011

Building a PI Lab for SharePoint 2010 and Excel Web App (Part 2)

Didn't have as much time today, but here are my findings:

1. A standalone ProcessBook installation requires a huge dependency package. So if you intend to deploy it on hundreds of PCs, better think it through. It should be installed on a terminal server, or delivered using Citrix. Of course, for large-scale deployments, it is better to plan on Web Parts over ProcessBook...

2. PI DataLink 2010 is supported on Excel 2010 32-bit only. It's documented in the Release Notes but, as usual, I didn't RTFM and had a 64-bit installation. The PI-DL installer doesn't tell you anything about this, and the result is that the add-in isn't installed in Excel as it should be.

3. If SharePoint were a blind date, I think I would stay polite, possibly pay the whole bill, then say goodbye with a kiss on the cheek... i.e. leaving all options open while sending a clear message at once. Web page editing is sluggish, which is unacceptable in 2011 for a web application, and I don't care how slow the back-end is. And it is resource-hungry as hell. Even a vanilla installation revealed a lot of clunkiness, with some "oops" error messages and dead links. Bottom line: I don't like SharePoint as of now, but I'll have to get used to it.

4. The drawing on my first post isn't right. I'll need to fix it in an update.

O.

Tuesday, March 15, 2011

Building a PI Lab for SharePoint 2010 and Excel Web App

I finally had some spare time today. No meetings, for the first time in a while. I had the chance to continue building my PI lab and see how I can use SharePoint 2010 and Excel Web Services. My intention is to harness the Web App version of Excel as much as I can, so that users can get PI data without needing Excel or PI DataLink at all. Developers will also be able to publish some pre-formatted reports on SharePoint.

I reinstalled almost everything from scratch to start afresh. It's not completed yet, but here's what it should look like. Maybe it can inspire some of you. As a bonus, it shows the interactions between the various OSISoft layers, which are not always obvious to a neophyte like me.


This is by no means what we'll have in production; everything has been installed with standard, typical (i.e. non-secure) settings, and it is used only to test and evaluate the interaction between the Microsoft and OSISoft components.

At the bottom, you have your honest-to-goodness vCampus PI System. It doesn't have any interfaces, but it can provide some mock points such as good ol' CDT158 and BA:TEMP.1.

In the middle, a default dumb SharePoint 2010 server has the PI-SDK feeding PI Data Services and PI DataLink Server 2010. They, in turn, feed Web Parts and Office Web Apps (which, in fact, only has a usable Excel).

On top, two terminal servers house the various clients. Why? Because The Man only installs and supports IE6 and Excel 2003 in our PC environment. Nothing more. So I basically need to set up VMs which have the clients we need: ProcessBook, Excel 2010, and IE8 with the Adobe SVG viewer. One replicates the system that will host the clients used by our Joe Average User, which is web-only, and by the more worthy "power users", who can use ProcessBook if they like. Developers have their own environment with Excel 2010, from which they can publish directly to SharePoint. Hint: our production will go the same way, using Citrix to deliver the required applications. Screw The Man.

That's what is on my radar for now. I haven't finished making all these things work but when I'm done, I'll update and correct this post.

O.

Wednesday, March 9, 2011

My build of iperf is back

***********************************************************
Update: You can get my build of iperf on Windows here:

http://www.mayoxide.com/iperf
***********************************************************


Whew! My post about my own build of iperf for Windows has proven to be the most popular on my blog so far. And the file got downloaded so many times that it actually busted the download limit of mayoxide.com (my domain) 12 hours before I left on vacation overseas. I had to renew my hosting package in a hurry, as I had kept it grandfathered to 2004 transfer quotas - 1 gigabyte, anyone? Now I have 10 times more and the download link is back (thanks to pjmco.ca, my hosting provider; pay them a visit, they're great).

Such a response shows me there is a need for a more "official" page for the Windows build of iperf, which is a matter of 30 minutes of handwritten HTML. I'll be working on this soon, I promise. In the meantime, my earlier blog post is a nice placeholder.
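For anyone grabbing the build, a quick usage refresher (the host name below is made up):

  # on one machine, run as the server
  iperf -s
  # on the other, run a 30-second throughput test against it
  iperf -c server.example.com -t 30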

O.

Wednesday, February 23, 2011

First IMS user group meeting

Today, Cooper Power Systems held their first IMS Users Group teleconference. Building a user community was in their plans, and I am glad to see it slowly becoming a reality. I'll spare you the technical announcements, because I'm not sure what can be made public. However, one of the things that stood out was the need to build an online presence where users can exchange ideas and solutions. Someone suggested using Google Groups, but they already have a forum they control, so my guess is that it will probably end up there.

Normally, a user group should be set up by users and remain independent. LinkedIn is a good place to start one. However, as far as niche products go, IMS is as specific as it gets. Cooper EAS obviously doesn't have a pool of thousands of customers on which a user community can easily be built, and communities need leaders. So under the circumstances, it is better to keep the community leadership in their hands.

I believe that building a user group around an unusual technology isn't necessarily a challenge... it's a feature that can be harnessed. Many specialized products have thriving user communities. For example, when I used to spend 100% of my time on HP business systems, the NonStop guys formed a fair crowd, tightly knit together with dedicated conferences and strong leadership, while HP-UX didn't benefit from such community momentum (I was trying to change that slowly, but my career path steered me elsewhere). That hit me: some people take pride in working with one-of-a-kind, high-quality systems. Cooper encourages papers, so I expect some of their customers to participate in the best way they can in that manner.

Speaking of papers and conferences, Cooper EAS is planning a special IMS and SMP track at their next EAS conference. We'll see if more people can make it than last year. And let's hope I'll be one of them.

O.

P.S. Someone asked me if I had plans to continue my entries on IED integration in IT. I have a rough draft for part 2 ready, but I haven't found the energy to finish it yet. I'll probably end up doing it eventually. I have to work on this from home on my own time, so bear with me.

Wednesday, February 9, 2011

Playing with PI Web Parts

I've spent some free time at work setting up a basic SharePoint 2010 Server in order to test the functionality of OSISoft's PI Web Parts.

This is my first experience with SharePoint and PI Web Parts, and I'm not very impressed so far.

Here are my thoughts:

SharePoint is terribly slow

SharePoint is unbelievably slow. Granted, I have a small VM, but I'm using the default configuration, no bells and whistles. Usability suffers from extreme sluggishness, and I wouldn't want to design pages in SharePoint full-time, as my objective in life is to keep a shred of sanity. When I use Web 2.0 apps, I expect Web 2.0 speed. Not cgi-bin-like responsiveness.



Web Parts rely on SVG (and, soon, MS Silverlight)


The "interesting" graphical Web Parts rely on SVG to generate graphics, like the PI Trend Web Part pictured above. Using SVG is not a problem per se, as it is a lightweight format which gives very usable results -- the trend graphics are live, and you can hover over parts of the graphic to get more info dynamically.

However, Internet Explorer is notorious for not supporting SVG natively. Furthermore, there is no option to configure these web parts to produce static image files... so if you have a locked-down desktop with IE8 (or, even worse, IE6) and no SVG viewer, you're fucked. The only solution consists of installing an old viewer from Adobe that has not been updated since 2005 and has been unsupported since 2009. Unacceptable in an enterprise environment. Calling the Man to install such a viewer on my laptop would result in the Man saying "no" and laughing his way back to the bank.

What further exasperates me is OSISoft's commitment to migrate to Silverlight in the future. Deploying Silverlight will be another complex task in a locked-down enterprise environment. Of course, Microsoft already knows how to deal with this: I think they can't bundle Silverlight with Windows 7 due to antitrust issues, but they will find a way to attach it to the next version of MS Office. So when the Man decides it's time to upgrade the desktops from Office 1981 to Office 2030, we might get Silverlight as a bonus and be able to see some PI Web Parts. Woo!

In a world where many intranet sites are hardwired to IE6, and nobody wants to risk updating anyone to IE8 (let alone IE9), SVG and Silverlight are critical points that need to be taken care of.

O.

Thursday, January 27, 2011

Fun with OSISoft's PI


RRDtool has been a savior as a no-frills data historian, giving folks a free, high-quality toolset to store and present time-series data. So when I first saw OSISoft's PI last year, my reaction was: Oh no, not an "enterprise" version of RRDtool!
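For the uninitiated, the entire RRDtool workflow is basically two commands. A made-up example storing one temperature gauge, sampled every 5 minutes and kept for a year:

  # create the database: 300 s step, averages kept for 105120 rows (one year)
  rrdtool create temp.rrd --step 300 DS:temp:GAUGE:600:U:U RRA:AVERAGE:0.5:1:105120
  # feed it a sample stamped "now"
  rrdtool update temp.rrd N:21.4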

Many know that I'm very critical of expensive software that labels itself as "Enterprise", especially when it ends up not working as expected, assuming it works at all. When paying six digits for software, one expects it to at least be good and provide value. But many times, it's a half-baked product, glued together by imbeciles who are led by idiots, which is then sold by a dim-witted sales team who think their target customer base is a bunch of morons. That might be harsh, but it's how I've felt for a long time about some software companies and their peddlers.

Needless to say, when I started architecting systems based on OSISoft's PI, I had low expectations.

It turns out I was wrong. PI (Plant Information), as its name implies, has roots deep in the Plant, not in the Enterprise. This is a significant difference that must have influenced its design all along. I can't pinpoint exactly what makes it special, but at its base it's simply elegant, and I feel perfectly comfortable playing with their tools. Everything in it looks like it has been put there for a real purpose, not to show up well in a feature comparison chart. The lack of buzzwords like "cloud computing" and "agile enterprise" makes me feel right at home.

There are some problems with OSISoft's suite of products, the top one on my list being the lack of good documentation. The documentation is either extremely high-level or very technical, with few images; there is no in-between for someone like me who only needs to make a quick proof of concept and leave the implementation details to the IT team. Their vCampus subscription process is also hard to work with. And I'm getting increasingly frustrated with their support site, which makes downloading each piece of software or documentation a tedious, three-step task.

But all these issues magically went away when I was able, in 15 minutes, to set up a mock operator screen for a mock reactor:


I like it when software lets you do things quickly, in a natural way. PI is exactly that. It comes with a few built-in data points that help someone quickly assemble a prototype and see its capabilities. I did this screen in PI ProcessBook. I can use it to show some of my colleagues, at a glance, what PI is all about; an image is worth a thousand words.

SharePoint is currently being installed in another VM, and my next step will be to try to present data with PI Web Parts. I'd like to see if that ProcessBook screen can be converted as easily as OSISoft says it can. Geez, SharePoint has just finished installing. Time to go see if I can pull it off in another 15 minutes.

O.