I, Object

I might have mentioned this before, but I started programming with the more loosely typed languages.  Think PHP, Python, or the loosest of the loose, Javascript.

With PHP, you don't really need to declare what type a variable will be- you can just assign:
$fortyTwo = "42";

Python is similar, and even simpler:
fortyTwo = "42"

Now, if you need to change the type of a declared variable, you might have to re-cast it- and since these casts return a new value, you re-assign it- but you still don't have to provide a type annotation beforehand:
PHP: $fortyTwo = intval($fortyTwo); // $fortyTwo is now the integer 42
Python: fortyTwo = int(fortyTwo)  # fortyTwo is now the integer 42

Javascript muddies the waters a bit, as you can add a string directly to a number, but you’re probably not going to like the results:
const fortyTwo = "42";
fortyTwo + 42; // returns “4242”

It’s safer to explicitly coerce the “type” yourself:
42 + Number(fortyTwo); // returns 84
or
42 + parseInt(fortyTwo); // also returns 84
or
42 + +fortyTwo; // a bit more confusing to read, but also returns 84
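One footnote on those last two: Number and parseInt aren't quite interchangeable. parseInt reads as many leading digits as it can and ignores the rest, while Number insists the whole string be numeric:

```javascript
// parseInt stops at the first non-numeric character
parseInt("42px", 10); // returns 42

// Number wants the entire string to be a number
Number("42px"); // returns NaN
Number("42");   // returns 42

// parseInt also truncates decimal strings to whole numbers
parseInt("42.9", 10); // returns 42
Number("42.9");       // returns 42.9
```

So for sloppy input like "42px", parseInt is forgiving; for a strict conversion, Number (or the unary +) is the safer bet.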

All basic stuff- but the point is that until I started working with C# earlier this year, I didn’t realize how complicated the very basic task of declaring a variable can be, and how deep into your program those complications can extend.

In a current project, we have a bit of code we use to send out an email notification.  A user submits a form on the frontend to an API endpoint, where a method on that endpoint processes the info and sends the email.  .NET has an interface for this: IEmailSender- you can implement and extend it as needed (and the process of extending classes is a good topic for another time).

The email processing can be used to send a few different kinds of messages (password reset, sales contact form, support request form, etc).  We have different templates for each, with different variables you can replace with custom info (the user’s name, their support ticket id, a password reset token, etc).  So that template info is just passed into our SendEmailAsync function as an object.  First, we initialize the object:

support_email customer_vars = new support_email {
    first_name = loggedInUser.first_name,
    support_id = objModel.support_id,
    request_type_id = objModel.request_type_id,
    screen_id = objModel.screen_id,
    request_subject = objModel.request_subject,
    request_notes = objModel.request_notes
};

The loggedInUser object is the current user (retrieved via the Json Web Token provided on each request) and the objModel object is the info from the form they submitted.  Now we can pass the customer_vars object into our SendEmailAsync function so they can be used in the template.

But making SendEmailAsync reusable was tricky.  If a variable is going to be passed into a function in C#, you have to give it a type.  Giving it the catch-all 'Object' type compiles, but you lose all the type information and end up casting everything back out- so usually you just create a simple class or interface, name your fields, and use that as the type.  You can also do useful tasks at this point, like declare required fields or even restrict what can be entered for a field:

using System.ComponentModel.DataAnnotations; // where [Required] lives

public class support_email
{
    public string first_name { get; set; }
    public string support_id { get; set; }

    [Required]
    public string request_type_id { get; set; }

    [Required]
    public string screen_id { get; set; }

    [Required]
    public string request_subject { get; set; }

    [Required]
    public string request_notes { get; set; }
}

This works great, until you want to pass a different type of object to the SendEmailAsync method. For example, we want to use the SendEmailAsync method to also send mail regarding a password reset. That’s not going to have a “screen_id” field, but you can’t just pass the wrong object type- C# won’t allow it.

This was an eye opener after a few years in Javascript-land.  There I can just pass anything as an argument and be off and running.  Our current answer is to create a super basic base class:

public class general_email_template {}

Then, each subset of email template simply inherits from that class:

public class reset_email : general_email_template
{
    //properties specific to reset email here
}

public class support_email : general_email_template
{
    //properties specific to support email here
}

And so on. In the SendEmailAsync method, we use the type “general_email_template” for our template variables object argument type and it works great. Any type that extends the type the method takes is acceptable.
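For the Javascript-minded, here's a rough sketch of the same gate in plain JS- note these class names just mirror our C# ones for illustration (they're not the actual project code), and JS can only enforce the check at runtime with instanceof rather than at compile time:

```javascript
class GeneralEmailTemplate {}

class SupportEmail extends GeneralEmailTemplate {
  constructor(requestSubject) {
    super();
    this.requestSubject = requestSubject;
  }
}

class ResetEmail extends GeneralEmailTemplate {
  constructor(resetToken) {
    super();
    this.resetToken = resetToken;
  }
}

function sendEmail(template) {
  // the closest JS gets to C#'s compile-time check: a runtime guard
  if (!(template instanceof GeneralEmailTemplate)) {
    throw new TypeError("expected a GeneralEmailTemplate");
  }
  return `sending: ${template.constructor.name}`;
}
```

sendEmail(new SupportEmail("Help!")) goes through; passing a plain object literal with the same fields throws- which is roughly what C# gives you for free before the code ever runs.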

Another possible option that we might explore is having multiple methods with the same name (SendEmailAsync) but different parameters. That way, we could just declare the specific type on each different SendEmailAsync version and let the language decide which one to use based on what it was passed (see previous post about how C# handles methods with the same name but different argument parameters). One approach might be better than the other, but this one works for now!

As an aside, another major difference between a statically typed language like C# and Javascript when performing a basic task is looping over an object. Coming from Javascript, it seems much more complicated in C#:

foreach(var v in variables.GetType().GetProperties())
{
    string prop_name = v.Name; // Name is already a string
    string prop_value = v.GetValue(variables, null)?.ToString(); // null-safe, in case a property was never set
}

Than in JS:

for(let i in variables) {
    let propName = i;
    let propValue = variables[i];
}
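(If you're on a newer runtime, Object.entries tidies the JS side up even further- and unlike for...in, it only walks the object's own properties, not inherited ones:)

```javascript
const variables = { first_name: "Pat", support_id: "1234" };

for (const [propName, propValue] of Object.entries(variables)) {
  // each entry is a [key, value] pair- no hasOwnProperty check needed
  console.log(propName, propValue);
}
```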

The type system means you can be much more confident about what you’re getting as each variable, but it can definitely be an adjustment coming from a more relaxed language!


Staying on Script

I like npm scripts.  I'm sure real programmers can memorize long strings of command line gibberish without even waking up the full 10% of our brain capacity that movies tell me is all we have access to, but I mess those commands up all the time.

Don't get me wrong- it's fun to be able to cruise around a system on the command line.  You feel like you really know secret stuff.  I like that you don't really even need a monitor to get real work done on a server- just a secure login method and some useful commands.  ssh in, check out where you are and where you can go (ls -la), change directories and do it again (cd /var/www && ls -la).  See something you want to check out?  Cat is your friend (cat package.json).  See something you want to change?  Use Vim (vim package.json).  Though be sure to install it first (sudo apt-get install vim -y) and be ready to look up the commands just to get around the page (note: I am not a Vim expert- I only use it when I have no other option).

But when it’s time to get to work, it’s really nice to just be able to type ‘npm start’ and have your dev environment spin up.  Particularly when you’re a less than stellar typist.

Npm scripts just go in your package.json file- in the appropriately named "scripts" object.  They can be extremely simple- for example, a project built with the angular cli automatically sets up a couple: "test": "ng test", "start": "ng serve".  These just make it easier to find those (admittedly already simple) commands.  A developer will know they can run "npm start", but if they're not familiar with the angular cli, they might not know about "ng serve" right away.  Npm scripts work well as aliases- unifying those commands under one umbrella- and one less thing for me to try to remember.
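For reference, a minimal "scripts" block looks something like this (the angular cli generates similar entries for you):

```json
{
  "scripts": {
    "start": "ng serve",
    "build": "ng build",
    "test": "ng test"
  }
}
```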

You can extend those simple commands as well.  I made a simple edit to my start command: "start": "ng serve --open".  This passes the 'open' flag to ng serve, automatically opening my default browser.  There are flags for many other options- check out the docs!

But where Npm scripts have really shined for me is in abstracting away any directory changes and environment settings that may have to be done in order to fire up other aspects of a project.  Sure- when you're just working on the frontend, it's easy to remember that you run "npm start" from the same directory your package.json resides in.  But one project I'm working on uses .net core as a backend API.  It runs locally, so one option is to fire up the full Visual Studio program, but this feels big and slow compared to VS Code now.  Luckily, .net core now comes with a command line interface.  I can run the command "dotnet run", and my backend dev server starts right up.

But there are complications.  This executable is not run from the same directory as package.json- it’s technically a completely different project in a completely different folder.  An environment variable also has to be set to “Development” in order for it to run properly.

But Npm scripts don’t care!  A script will just take your commands and run them.  So, my new script is:

"backend-start": "cd ../path/to/dotnet/project/folder && set ASPNETCORE_ENVIRONMENT=Development && dotnet run"
The path really is longer- this is just the way a .net project is initialized.  The point is- instead of typing everything to the right of the colon, I just type everything to the left (from my package.json directory).  The Npm script changes directory, sets my env variable, and starts the process- pretty cool and useful!  I created another to publish the backend and can now forget those file paths and environment settings and other commands.  More brain space for me!
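One caveat: 'set' is the Windows way to set an environment variable- on Mac or Linux the script would need 'export' instead.  If your team is mixed, the cross-env package can paper over the difference (I haven't adopted this myself yet, so treat it as a sketch):

```json
{
  "scripts": {
    "backend-start": "cd ../path/to/dotnet/project/folder && cross-env ASPNETCORE_ENVIRONMENT=Development dotnet run"
  }
}
```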

What’s in a name?

Working on a team can be tough.

It’s necessary for rapid, effective development- a team can work much faster and has a much larger knowledge base than a solo developer.  It’s also necessary for security and building a robust application- a team of eyes will spot issues and possible improvements that a solo developer might miss.  So I understand the many benefits.

But it also has pitfalls.  We are working on a .net API for our Angular application.  As part of this API, an admin can change the email address in a user’s account (which also serves as their username).  During testing, we found that edits to the email weren’t “taking”.  We could update anything else, but when trying to update the email, the dreaded “500 Server Error” message kept popping up.

So I started digging.  The error in the stack trace shouted at me: "Object reference not set to an instance of an object" - which, through past frustrating experience, I learned is a null reference exception.  Somewhere in the API's Put method, we were referencing a property or method on an object that had never been initialized.

But I just could not see it.  I started working as a developer solo- which can be difficult in its own way, but one advantage is that I (usually) know where I put things.  Where I initialize variables, where I add logic, etc.  Not always- sometimes lazy present me really screws over frustrated future me- but generally it's pretty quick to find the path of my previous thinking.  I couldn't see the error in this codebase by looking, so I started logging to the console (see previous post!).  Eventually, I got to the null object- a duplicate call to a method we use to get a user info object.

We work with another team remotely.  They do great work and having worked solo, I appreciate that they each have a head full of tips and tricks and knowledge that I may never have.  But sometimes they might not examine anything too deeply before starting on the coding.  They receive a job and a prototype or existing file and build it/update it with new functionality- they don’t necessarily analyze what’s already there for conflicts.

In this case, they had been tasked with adding a feature to the .net code to allow an admin to edit the access level of an account (employee vs admin).  When they added this to the update method, they ignored the code above that covered updating the rest of the account.  The method variables were up there- including the one that instantiates the object representing the user being updated.  This is done with a reference to their unique user id (which is never changed).  In the new code, an attempt was made to create a user object by referencing their email address.

And there’s the bug.  You can’t get a user by email if they’ve just changed that email (or if you get really unlucky, and that email already exists in the system, you’ve got an even trickier bug).  The fix to this bug was ridiculously simple- remove this 2nd user object and update the calls below it to reference the original user object.  Works like a charm now.  Finding the bug was trickier- it was hidden in the middle of a method and the reference to the user was not named something obvious (it was called “roles”).  When I scanned the code, “roles” didn’t really register as being involved in an email account update, but it was the culprit.

A team can put processes in place to mitigate the issues of distributed development- but it requires some extra work.

  • A style guide and naming conventions are a great first step.  Make sure all your devs are on the same page by putting the requirements on an actual page (or a virtual one).
  • Take it a step further and create a set of linting rules to enforce the guide- maybe make an update pass all the checks before you can push to git or svn.
  • Do code review sessions- instead of moving forward with new code every day, take the time to review what was written yesterday.  Just because something compiles and outputs what you want doesn’t mean it should be in the final product.

This can be tough to sell to a supervisor- there’s no real immediate benefit as far as project progress or new tasks checked off a list- but it can really save time down the road.  Sometimes, it takes one developer to push for such a shift- a shift that will make your and your co-worker’s lives easier.

Multiple Method Madness

Arguments are generally important.

Different languages have different ways of dealing with them- but a couple have really interested me lately.  I work mostly in Javascript- which generally doesn’t care at all about your arguments.  Sure, you might have a function that accepts “error” and “callback”, but JS doesn’t actually care if you pass them or not.

That can be kind of useful- though sometimes a bit confusing.  Sometimes, I'll have a function that has a couple different possible outcomes.  Simple example: In our Angular application, there are multiple methods that get info from the database (using observables).  Sometimes, we will need a final callback to fire after the data is retrieved.  You can pass 3 'blocks' to a subscription: the success, error, and finally blocks (RxJS officially calls them next, error, and complete).  That finally block is a great place to perform a task like hiding a 'loading' animation.

But in one case, the method calling the .subscribe function was on a shared service, and the function that needed to be called in the ‘finally’ block had to remain on the component itself (it created a d3.js chart and needed access to the DOM).  So, we added an argument to the method on the service for callback.  Something like:

getData(cb) {
    backendService.get(url).subscribe(response => {
        //set local variables and manipulate the data response here
        this.myData = response.map(d => d);
    }, error => {
        //handle errors here
        console.log(error);
    }, () => {
        //clean up and call the callback (if one was provided)
        if(this.myData && cb) {
            cb(this.myData.length);
        }
        hideLoader();
    });
}

The 'finally' block doesn't take any arguments- you just perform tasks. In this case, I check to make sure a callback has been provided (see below for details), and to make sure the data came in (check the length because I think a simple boolean check on an empty array still returns true- but 0 will return false).  If it did, I fire the callback (sending the data to the chart and drawing it to the screen).
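For the record, that hunch is right- an empty array is truthy in Javascript, while 0 is falsy, which is exactly why checking .length works:

```javascript
// all objects are truthy- even empty arrays
Boolean([]); // returns true

// 0 (and NaN) are the falsy numbers
Boolean(0); // returns false

// so an empty array's length gives you the falsy check you actually want
Boolean([].length); // returns false
```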

To call the method, we have a couple options.  Pass a callback function or don’t.  Like I said- JS don’t care.  If I don’t have a graph to draw but need to get data, I just don’t pass a callback.  The check in the ‘finally’ block to make sure ‘callback’ exists before it’s called prevents any errors.  If a callback is provided, it gets called when the data is available.  Pretty cool!
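Stripped of the Angular and backend bits, the optional-callback pattern looks something like this (fake data standing in for the real response, just to show the shape):

```javascript
function getData(cb) {
  // stand-in for the data that would come back from the subscription
  const myData = [10, 20, 30];

  // the 'finally'-style cleanup: only fire the callback if one was passed
  if (myData && cb) {
    cb(myData.length);
  }
  return myData;
}

// with a callback- it receives the data length
let count = 0;
getData(len => { count = len; }); // count is now 3

// without one- no error, the data still comes back
getData(); // returns [10, 20, 30]
```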

In a statically typed language like C#, things are a bit different.  C# definitely cares about arguments.  If you try to pass nothing to a function that has arguments, you get an error.  If you try to pass an argument to a function that takes none, you get an error.  If you pass a string argument to a function that takes an int argument, you get an error.

You get the idea.

Strict, but no less useful than the JS method- it just requires a different mindset.  You sacrifice flexibility for security.  You know what that function takes and it will not run with anything else.  Probably leads to fewer bugs down the road.

And you have different options for flexibility.  In C#, you can have two functions with the same name- as long as they take different arguments:

public int addStuff()
{
    return 2 + 2;
}

public int addStuff(int y)
{
    return 2 + y;
}

Stupid, simple example- I know.  But still a pretty cool concept.  You get the same flexibility at the call site as you would with JS.  The addStuff function doesn’t care if I pass an int or no int (as long as I have both methods accounted for above).  It will still give me what I want.  The above example wouldn’t be very useful, but we did use it for a method in our live application.  I’m a bit new to the .net world, but it’s still cool to see the parallels with something familiar (and the differences).
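Javascript can't overload by signature, but a default parameter gets you a similar feel at the call site- a rough JS equivalent of the two addStuff methods above:

```javascript
// one function covering both C# overloads:
// call it with nothing and y falls back to 2; call it with an int and that's used
function addStuff(y = 2) {
  return 2 + y;
}

addStuff();  // returns 4, like the no-argument version
addStuff(5); // returns 7, like the int version
```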

Hold My Beer

The rallying cry of a redneck about to do something really stupid.  I think we should have a version in the programming world where it means: “I just spent half the day writing this code and not compiling/running it once- here goes!”  You’re likely about to hurt yourself in a spectacular manner.

But this time, for the first time ever, it actually worked (results not typical).  I was adding a really simple module to allow an admin to add new activity types (for audit logging).  Usually, I will recompile the .net code at multiple steps, but with Visual Studio, that has become less important- it will yell at me with those red squiggly lines if I really mess something up.  It’s my complete newbieness with C# that usually causes the issues- generally trying to use JS syntax and finding that it just doesn’t work that way.  Imagine my surprise when I finally ran the app and it not only compiled without error, but also did what it was supposed to do!

But no one would recommend this as a best practice.  And really, most modern development tools have checks against this built in.  There are Visual Studio's aforementioned red squigglies that burrow through your eyes and into your brain, relentlessly mocking you and your stupid mistakes.  Also, on the front end, when working with a transpiled language like Typescript (which we are and it's awesome), you usually have a "Typescript compile watch" constantly running.  Whenever you save a file that's being watched (usually all .ts and .html files within your app folder), that watcher will automatically attempt to re-transpile the entire thing.  If you've done something really dumb, the command prompt will yell at you (no red squigglies, but oh well - incidentally, the word 'squigglies' gets a red squiggly line of its own!).

Of course, an app can compile or transpile and still be completely broken, so it’s definitely good to test frequently after small changes.  Even when your dev tools don’t force you to.

There’s a great line in the Mel Gibson (I know) ode to violence and America “The Patriot”.  He’s giving his young sons quick tips on how to kill British soldiers in the most effective manner possible.  He says “Aim small, miss small”.  It’s cheesy and I have no idea if it actually makes you a better shot, but it is solid advice to co-opt for writing code.  Add or modify something small and check your results.  You might miss- but it will take a much smaller correction than if you spent hours editing multiple files across your entire project, only to find you’ve broken the whole thing.  Or missed that evil Redcoat!

The metaphor might have broken down a bit at the end there, but I think you get the point.

Architect for Destruction

So we're building an Angular 2 application (I may have mentioned this before).  It's been a fun and frustrating experience- but in reality, most of that is our own fault.  Using a still developing framework comes with risks- the main one being that both the framework itself and the best practices for that framework can change at any time.  Just when I figured out how to really use the EventEmitter, I read that we should just use Subjects instead.  And they're great- both an Observable and an Observer in one!  But it can be frustrating to learn that all you learned is obsolete as soon as you learned it.

Anyway- I’m actually really liking Angular 2 (I just also like to complain).  The tougher part has been the .NET backend.  I had no prior experience with .NET, so it’s been another steep learning curve.  And that separation of my knowledge base (good with JS, bad with C#) led to an idea for separating our project code bases.  Is it a good idea?  I have no idea.

The premise is to have the Angular front end running on a simple Node server.  That keeps our frontend project in the JS space, and Angular and Node just seem to play well together.  All that Node server needs to do is serve up the front end, so it was fairly easy to build (and I really like Node, so that part was fun).  Then, the .NET backend runs on IIS (which also plays well with it).  Each backend module will be its own .NET project (one for login/users, one for product A, one for product B, etc).  Then, we just treat those as internal APIs- the frontend asks for the data (JSON) and displays it, the backend's job is just to provide it.  Authorization is handled via JSON web tokens.  We don't have to try to get Angular working on IIS and have it all bunched up in one monolithic project.

It seems like a good idea.  Modularization is all the rage right now, and it’s also something that was emphasized with this project’s design (some clients may subscribe to some features, while others subscribe to different features- they need to be self contained).  This way, it also paves the way for exposing public facing APIs in the future- the structure is already there because we are already using it internally, we just have to provide the key.

Again- I don’t know if this is a good idea or bad idea or if it’s the way something like this is actually supposed to be done.  I never claimed to be a software architect- I was a JS developer that got pushed into the backend.  But it does seem to work, and does seem to check off all the boxes for the project specs, so we’re going to give it a shot!

Gulp it down- tastes great!

Automation and task management.  There was a time, I’m told, when these terms were the domain of the devops team alone.  A lowly javascript developer didn’t have to know how to automate tasks- we were just supposed to write setTimeouts and jQuery and try not to overload the browser with DOM repaints.

But times change.  React and Angular and all the frontend frameworks made it easier to create awesome online experiences- but came with a price.  Setup has become a real chore in the frontend world.  Gone are the days when dropping your script tags at the bottom of your html file was best practice.  This is the time of bundlers and task runners and module management systems.

So, a while back, I dipped my toes in Grunt (I know, I'm way behind the curve) and liked it.  It's basically just a big config file- you can import and use different tools (grunt-contrib-uglify, grunt-contrib-concat, etc) to minify and concatenate your code.  Worked great, but I've always been more a fan of writing functions than just passing config options- so I heard Gulp might be a better choice.

And so far, it’s pretty nice.  The project I’m working on uses an Angular 2 frontend on a .net backend (sqlserver database).  I do a bit of work on both, but some of the team only works in Angular 2.  Some of them won’t use Visual Studio either, so I tried to come up with a way they could continue to work on the front end without needing the backend at all.

My (probably wildly inefficient) answer was to have a couple localOnly versions of the Angular 2 services dealing with the backend (just 2 right now).  They just use mock data (stored in simple json objects that match the structure of the actual database) and return the same format as the actual functions (so we don’t have to update every component that calls them).

Works great- but how to make sure the localOnly files are used locally, but the ‘real’ ones stay on the full project?  A Gulp library provides the way!  gulp-rename makes it super simple to rename and move files.  So, when a frontend worker updates their project from SVN, they just run ‘gulp frontendify’ afterward- it handles the file renaming/movement and we are good to go!

I know- it’s a really small start, but it works, so I’m happy for now.  And the best answer is probably the new hotness: containers.  Each worker just runs the same container and all dev/production environments are uniform.  I’ve started researching Docker and it’s very cool- but it’s also very powerful and a bit complicated, so I have a lot of work to do!

I know we like to take sides and have battles.  Grunt vs Gulp vs NPM scripts vs Docker – but the real answer is that most of them work just fine.  Pick one you like and go with it.  Worst case: you’ll have to learn something new a little way down the road.