Archive for the ‘interests’ Category

Agile Project Management with Scrum

Saturday, August 18th, 2007

I just finished reading Agile Project Management with Scrum by Ken Schwaber. It was an easy read with entertaining war stories, but when it comes to the Scrum methodology itself I find myself a bit torn.

On the one hand, it seems like an entirely reasonable and sane way to manage a software project with constantly changing requirements while producing a minimal amount of overhead in the form of paperwork.

On the other hand, it seems that you can’t have a software development methodology without high-priced consultants to show you how to do it. Honestly, though, Scrum is simple enough that I can’t see why you’d need one…

Unless, of course, they invent their own jargon, in which case it might sound pretty complicated. It might even sound so complicated and different from what you’re doing now that you might decide that a consultant sounds like a pretty good idea after all. Hmmm…

Let me save you the money. Here’s a description of Scrum using the jargon:

  1. At the beginning of a Sprint, the team meets with the Product Owner and the Scrum Master to update the Product Backlog
  2. The team creates a Sprint Backlog by selecting the items in the Product Backlog that it will turn into sashimi¹ during the Sprint
  3. Each morning, the team holds a Scrum to coordinate the tasks it will work on
  4. At the end of the Sprint, the team holds a review meeting where it demonstrates the work accomplished
  5. All meetings are time-boxed
  6. Once a cycle is completed, it begins again, updating the Product Backlog for a new Sprint

And here’s a description of Scrum in English:

  1. Every 30 days, the team meets with the Product Manager and Team Lead to update a prioritized list of features and tasks
  2. The team decides how many of the top priorities it can handle in the next 30 days, and commits to finishing them completely
  3. Each morning, the team holds a short meeting to discuss current status of the tasks on the list and address any issues that arise
  4. At the end of the 30 days, the team demonstrates the work accomplished to the stakeholders
  5. All meetings are limited to a fixed time (e.g., 15 minutes for the daily status meeting) in order to keep the work moving forward
  6. Once a cycle is completed, it begins again, updating the list of features and tasks to reflect new priorities and choosing those to be accomplished in the next 30 days (see the sketch below)
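
In fact, the whole cycle fits in a few lines of code. Here’s a tongue-in-cheek sketch in Python; the backlog items and the capacity number are invented, and real sprints involve people rather than lists:

    # The entire Scrum loop from the list above, as toy Python.
    backlog = [  # the prioritized list of features and tasks
        "user login", "search", "reporting", "admin console",
    ]
    CAPACITY = 2  # how many top-priority items the team commits to per sprint

    while backlog:
        # take the top priorities for this 30-day sprint
        sprint = [backlog.pop(0) for _ in range(min(CAPACITY, len(backlog)))]
        for day in range(30):
            pass  # short daily status meeting, time-boxed to 15 minutes
        print("Sprint review: demonstrated", sprint)
        # update and re-prioritize the backlog before the next cycle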

That said, it’s still worth reading. The stories of teams using Scrum in the real world are fun to read, and you might even learn something. In Software Engineering especially, it’s easy to get so caught up in abstract theory about the software lifecycle (or raw fish, or whatever) that it’s very helpful to see how it’s actually put into practice.

I have to say, though, that when it comes to consulting, Schwaber almost lets the cat out of the bag. In Chapter 5, “The Product Owner,” he tells a story about working with a company where both the team and management were pretty skeptical of Scrum, so he decided to keep it low-key:

… I told them that we used a prioritized list of things that they wanted done to drive development cycles of one month. Every month, we’d show them completed functionality, we’d review the list of what to do next, and we’d figure out what the team could do next… Scrum seemed simple, easy to understand, and [...] very straightforward.

Of course, if you put it that way, how are you going to make any money? Certified ScrumMaster? Sounds good. Certified Guy-Who-Keeps-the-Prioritized-List? Not so much.

Bottom line: if you want to know about Scrum, read about it on the web, then try it. If you’re like me, you’ll read the material available on the web, figure that it seems way too simple (surely there must be more to it), and decide that you need to purchase a book or two to really understand the methodology.

Well, yes and no. If you like reading (and I do), and if it makes you feel better (and it did), go ahead and get the book. It’s a short read (fewer than 150 pages), it’s pretty entertaining, you’ll probably learn something, and it might even fire you up, thinking “hey, yeah, this could work!” On the other hand, if you’ve read the available materials on the web, you’ve pretty much nailed it. The only way you’re going to get any better at it is by doing it.


¹ At least, I think that’s what sashimi means; the term is used before it’s defined, and it doesn’t appear in the book’s glossary. From searching the web, it appears that it might be the name of an actual Japanese project management methodology. Or possibly just raw fish.

Pop-quizzes and pedagogy

Tuesday, March 7th, 2006

A study from Washington University in St. Louis finds that

quizzes — given early and often — may be a student’s best friend when it comes to understanding and retaining information for the long haul

Say, that gives me an idea…

Best science headline ever

Monday, February 20th, 2006

From New Scientist: Hand waving boosts mathematics learning.

So don’t complain when I do it in class: it’s a pedagogical technique.

On a more serious note, blame computers for this one: Mathematical proofs getting harder to verify:

“Twenty-five years later we’re still not sure if it’s correct or not. We sort of think it is, but no one’s ever written down the complete proof”

DenyHosts

Monday, July 18th, 2005

SSH dictionary attacks got you down?

DenyHosts parses the SSH log, tracks attempts, and automatically updates /etc/hosts.deny.

(You are using TCP Wrappers, aren’t you?)
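
If you’re curious what that automation amounts to, here’s a toy sketch of the idea in Python. The log path, threshold, and regex are illustrative, and DenyHosts itself does considerably more (purging old entries, honoring whitelists, and so on):

    # Toy version of the DenyHosts idea: count failed SSH logins per source
    # address, then append repeat offenders to /etc/hosts.deny so that TCP
    # Wrappers refuses their connections.
    import re
    from collections import Counter

    LOG = "/var/log/auth.log"  # often /var/log/secure on Red Hat-style systems
    THRESHOLD = 5              # failed attempts before an address is blocked

    failures = Counter()
    with open(LOG) as log:
        for line in log:
            match = re.search(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)", line)
            if match:
                failures[match.group(1)] += 1

    with open("/etc/hosts.deny", "a") as deny:
        for address, count in failures.items():
            if count >= THRESHOLD:
                deny.write(f"sshd: {address}\n")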

Blueprints for High Availability

Saturday, July 2nd, 2005

I started a new job in May in Quality Assurance at FileNet Corporation. I know what you’re thinking: QA? I thought you were (or were pretending to be) a UNIX sysadmin? When did you become a software tester?

Well, this isn’t your average QA job. I work in the deployment group. Our job is to make sure that products work when they’re… well, deployed. As in, sure, all of your software tests passed in the lab, but what happens when we try to deploy it in the Real World™?

See now, that’s a sysadmin’s dream — getting to play with lots of cool technology (firewalls, load balancers, clusters, SANs, app servers, database servers) in an environment where breaking something not only isn’t a disaster, it’s actually part of your job.

Of course, if you’ve never built a cluster before, there’s a bit of a learning curve. So I decided to start where I always do when confronted with a new technology: the bookstore.

Since much of our work is focused on High Availability and Disaster Recovery configurations, I thought I’d start there. The first book I picked up was Blueprints for High Availability, Second Edition by Evan Marcus and Hal Stern. The authors ought to know what they’re talking about: Marcus is a principal engineer at VERITAS, and Stern is the CTO of Sun.

The book is structured around a list of technologies and practices arranged into an “Availability Index.” Think of it as the OSI model for HA. At the bottom are the fairly straightforward things that everyone should be doing to ensure availability, such as buying reliable hardware and making regular backups. Each layer works toward increasing levels of availability (and cost) with technologies such as clustering, replication, and failover. And they’re right, they really are layers — there’s no point wasting lots of money building a global cluster to fail over between geographically separate sites if you haven’t invested in fault-tolerant storage and redundant network connectivity.

Having described the Availability Index, the authors provide a general introduction to the field, including its jargon (e.g., MTBF, MTTR, sigmas, and nines). Especially helpful here is a chapter on “The Politics of Availability,” describing how technical personnel can get management buy-in. This is important, considering that (a) sysadmins aren’t always particularly good at communication, and (b) HA technology tends to be expensive. If you haven’t built the case for availability, expect resistance.
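
If the jargon is new to you, the arithmetic behind it is simple; here’s a quick sketch of what the “nines” translate to:

    # The math behind the "nines": availability as a fraction of the year.
    # (Equivalently, availability = MTBF / (MTBF + MTTR).)
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for nines in range(2, 6):
        availability = 1 - 10 ** -nines  # e.g., three nines = 0.999
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{nines} nines: {availability:.5f} uptime, "
              f"{downtime:,.1f} minutes of downtime per year")

Five nines, the classic telco target, works out to a little over five minutes of downtime a year.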

Assuming that you’ve gotten the green light, we begin working our way up the Availability Index, starting with (my favorite) good system administration practices such as change control and consistent system configuration. The next three chapters cover storage management issues, including backup and restore, volume management, and RAID. The final chapter on storage is a good introduction to SANs, NAS, and storage virtualization (especially helpful to me as a relative newbie to “enterprise storage” — I didn’t know you could do all that).

Having taken care of local storage, the next chapter takes on networking, discussing the different ways in which networks fail and options for building redundant networks. Redundant network connectivity leads naturally to a discussion of Data Centers and environmental issues such as racks, redundant power, and cooling.

The chapter on environmental issues ends by discussing something completely different: system-naming conventions. This happens to be one of my pet peeves, and I think the authors are dead-on. Too many people try to name their systems using some sort of code: if you’re working in the Orange County office and you have three machines running AIX, please don’t name them oc-aix01, oc-aix02, and oc-aix03. That kind of thing may work well for network equipment such as switches and routers (after all, what’s important about a network device if not its location?), but it’s a horrible idea for systems: it’s hard to remember, and it’s hard to communicate. So you’re on the phone, in the middle of a noisy data center, and you’re under pressure to get things back up and running immediately. Now which one were you supposed to reboot — was it oc-aix02 or oc-aix03? I can’t remember. Damn…

On the other hand, if all of your machines have “real” names (cartoon characters, say), are you really going to forget whether it was linus or snoopy that you were supposed to be working on? And if you were planning to tell me that you’re encoding important information in the names (e.g., the OS they’re running), I simply counter that you lack imagination: name the AIX boxes after cartoon characters, the Linux boxes after characters in Lord of the Rings (good Lord, people are using these for baby names?), and the Oracle servers after Navy ships.

Whew. Ok, back to the book. The next chapter discusses people and processes for availability, including maintenance plans and vendor management. The discussion reminds me of another of my favorite books, Limoncelli and Hogan’s The Practice of System and Network Administration. Actually, come to think of it, so did the previous chapter. Do yourself a favor and get both.

The next couple of chapters cover issues with applications, including the special requirements of NFS servers, web servers, and database servers. The authors describe the different kinds of things that can go wrong with applications (memory leaks, network connectivity issues, buffer overflows, hung processes), as well as techniques for sharing state among multiple instances of an application and checkpointing in case of a failure.
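
The checkpointing idea, at least, is simple enough to sketch. Here’s a minimal, hypothetical version in Python (the file name and the shape of the state are invented):

    # Minimal checkpointing sketch: persist enough state to resume after a
    # crash, using write-then-rename so a failure mid-write can't corrupt
    # the checkpoint file.
    import json
    import os
    import tempfile

    CHECKPOINT = "state.json"

    def save_checkpoint(state):
        fd, tmp = tempfile.mkstemp(dir=".")
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp, CHECKPOINT)  # atomic on POSIX filesystems

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"next_item": 0}  # no checkpoint yet: start fresh

    state = load_checkpoint()
    for item in range(state["next_item"], 1000):
        # ... process item here ...
        state["next_item"] = item + 1
        save_checkpoint(state)

If the process dies anywhere in the loop, restarting it picks up from the last completed item instead of the beginning.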

Finally we reach the heart of the matter, at least as far as I’m concerned: there are four chapters devoted to clustering, failover, and replication. You won’t learn everything you need to know (and in fact, if what you need to know are the technical details of particular products, you won’t learn anything), but it’s a good introduction to the components of an HA system (virtual IP addresses, shared disks, heartbeats), the options for configuring clusters (active-passive, active-active, service groups, N-to-1 vs. N-plus-1), and issues such as fail-back and split-brain.
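
To make those components concrete, here’s an illustrative sketch of the basic active-passive loop: the standby node watches the primary’s heartbeat and claims the virtual IP after enough consecutive misses. This is no particular vendor’s product; the host name, port, address, and interface are all invented, and real cluster software also fences the failed node to prevent exactly the split-brain problem the book describes:

    # Standby node's failover loop (illustrative only).
    import socket
    import subprocess
    import time

    PRIMARY = ("primary.example.com", 7000)  # hypothetical heartbeat endpoint
    VIRTUAL_IP = "192.168.1.100/24"          # the address clients connect to
    MISSES_ALLOWED = 3                       # consecutive misses before takeover
    INTERVAL = 2                             # seconds between heartbeat checks

    def heartbeat_ok():
        """True if the primary answers its heartbeat port."""
        try:
            with socket.create_connection(PRIMARY, timeout=1):
                return True
        except OSError:
            return False

    misses = 0
    while True:
        misses = 0 if heartbeat_ok() else misses + 1
        if misses >= MISSES_ALLOWED:
            # Take over: bring the virtual IP up on this node (Linux-style).
            subprocess.run(["ip", "addr", "add", VIRTUAL_IP, "dev", "eth0"])
            break
        time.sleep(INTERVAL)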

Next comes a short chapter on “Virtual Machines and Resource Management,” which seems out of place. Perhaps its material should have been relocated to the chapter called “A Brief Look Ahead” on future trends such as iSCSI, InfiniBand, and grid and blade computing. The book was published in 2003, but much of this technology seems to still be in its infancy.

The flow picks back up with the final major chapter, on Disaster Recovery. This chapter is much less about technology than about planning and logistics. It’s important, I don’t deny it, but personally I was looking for a discussion of global clustering.

Finally, the Second Edition was written after the attacks of September 11, 2001, when businesses really started to think about not just HA, but Disaster Recovery and Business Continuity. This edition includes a chapter called “The Resilient Enterprise,” describing how, despite losing its offices and even the trading floor itself on the morning of September 11, the New York Board of Trade was able to recover and be ready for business by 8pm that evening. Now that was a Disaster Recovery Plan.

This isn’t a terribly technical book, but it’s a good introduction. If you’re just getting started, start here. If you need to configure a cluster, start elsewhere. The next book on my stack is Shared Data Clusters, by another engineer at VERITAS, and it appears to contain more technical details.

Full-Disclosure Weekend

Sunday, May 15th, 2005

Symantec Worm Simulator

Wednesday, May 11th, 2005

Symantec has released a Worm Simulator. I can’t tell whether this is just a sales tool (“Oooh, look at the scary worm! Buy stuff from us or the worm will get you!”) or if it could be useful as a research tool.

If you’re running Windows, download it and let me know.

The DNS Poisoning Attacks

Friday, April 8th, 2005

As of this post, the latest update from SANS was here.

The attacks are serious enough that the Internet Storm Center has raised their Infocon level to “Yellow.” I know this because the icon in my system tray has turned yellow and started flashing.

SIGINT

Wednesday, March 30th, 2005

For those of you who are interested in spy-stuff, I recommend the new book Chatter: Dispatches from the Secret World of Global Eavesdropping by Patrick Radden Keefe.

To quote Scott McNealy (CEO of Sun Microsystems): “You have zero privacy anyway. Get over it.”

The Secret Service and Distributed Computing

Tuesday, March 29th, 2005

The Washington Post has an article on the Secret Service’s internal system for cracking encrypted files. Sort of their own distributed.net.
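
The idea behind a distributed.net-style cracker is easy to sketch: carve the keyspace into chunks and let each worker exhaust one chunk. Here’s a toy version in Python, with an invented hash target standing in for a real encrypted file:

    # Toy distributed key search: one chunk of the keyspace per worker.
    import hashlib
    from itertools import product
    from multiprocessing import Pool
    from string import ascii_lowercase

    TARGET = hashlib.md5(b"spy").hexdigest()  # pretend this unlocks the file

    def search_chunk(prefix):
        """Try every three-letter candidate that starts with this prefix."""
        for rest in product(ascii_lowercase, repeat=2):
            candidate = prefix + "".join(rest)
            if hashlib.md5(candidate.encode()).hexdigest() == TARGET:
                return candidate
        return None

    if __name__ == "__main__":
        with Pool() as pool:
            for found in pool.imap_unordered(search_chunk, ascii_lowercase):
                if found:
                    print("key found:", found)
                    break

The real system presumably runs across many machines rather than one box’s cores, but the partitioning idea is the same.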