on 01-30-2020 05:40 AM
This article lays out my thoughts on peer code reviews, using some code samples to explain my position.
In short, the current skill level in our space, and coding practices (even "best practices") shaped by advice from influencers, cannot produce the benefits associated with peer reviews. At this stage, peer code reviews should instead be replaced with a dedicated code reviewer who can articulate technical excellence while imparting the knowledge that leads to high quality. That transfer cannot happen between low-skilled coders. It is unfair, it stunts growth, and it results in the practices now ubiquitous in our space. We would only be pouring water into an already filled glass.
Reviews among low-skilled coders could instead be treated as socialization sessions to build process knowledge and team chemistry. People who don't know the code, the workings of the language, or the theoretical background needed to understand the nuances that separate familiarity from readability cannot produce excellence or maintainable code. Such reviews often lead in the opposite direction: support for practices that inhibit excellence.
I've written my own share of terrible code, and not long ago: last year, last month, last week, yesterday. That underscores the point that peer reviews without the right skill set aren't technically helpful. How, then, can someone without support or time to devote to the craft produce anything more than familiarity?
A pet peeve of mine in peer reviews is the assumption that comments increase comprehension, that they help the reader know what the code is doing. In our space, comments are mostly short descriptions of unreadable code. If there is a need to comment code, it should be to expose functionality to consumers, not to tell the reader what individual lines are doing. Why task the reader with both the comments and the code, especially when the code is already cluttered?
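To illustrate the distinction, here is a sketch of my own (the names are hypothetical, not from any article sample): a consumer-facing comment documents the function's contract, while a line-narrating comment merely restates the code.

```javascript
// A consumer-facing comment documents what the function offers:

/**
 * Extracts the incident number token from an email subject line,
 * e.g. 'CloseHotDeskRequest#12345#' -> '#12345'.
 */
function extractIncidentNumber(subject) {
    return subject.substring(subject.indexOf('#'), subject.length - 1);
}

// Versus line-narrating comments, which only restate the code:
function extractIncidentNumberNarrated(subject) {
    var start = subject.indexOf('#');                     // find the first '#'
    return subject.substring(start, subject.length - 1);  // drop the trailing '#'
}
```

Both functions do the same thing; only the first comment tells a consumer anything the code doesn't already say.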
In code like the sample below, how much would a comment help someone updating it? Would it help a maintenance developer at all? Yet de facto best advice calls for commenting such code, and even for reformatting it for "readability." The outcome is merely more familiar, not more comprehensible.
if (sub.indexOf('CloseHotDeskRequest') > -1) {
    var incNum = sub.substring(sub.indexOf('#'), sub.length - 1);
    var glide = new GlideRecord('incident');
    //glide.addQuery('short_description', 'Temp Space (Doe, John)#12345#');
    glide.addQuery('short_description', 'CONTAINS', incNum);
    glide.addQuery('description', 'CONTAINS', 'Temporary space request:');
    glide.query();
    if (glide.next()) {
        var assigned = new GlideRecord('sys_user');
        var x = email.body.tech_owner;
        var theirName = x.substring(x.indexOf(",") + 2, x.length) + " " + x.substring(0, x.indexOf(","));
        var theirEmail = theirName.substring(0, theirName.indexOf(" ")) + "." + theirName.substring(theirName.indexOf(" ") + 1, theirName.length)
            .replace(/\s/g, '')
            .replace("'", "") + "@domain.com";
        assigned.addQuery('email', theirEmail);
        //assigned.addQuery('u_primary_assignment_group', glide.assignment_group);
        assigned.query();
        if (assigned.next()) {
            glide.assigned_to = assigned.sys_id;
            glide.resolved_by = assigned.sys_id;
        } else {
            glide.work_notes = "Potential Problem, Assigned User could not be found name = " + theirName;
        }
        glide.u_business_impact = 'No';
        //glide.u_application = 'Desktop';
        glide.sys_created_by.setDisplayValue('Agio Inbound Action');
        glide.close_code = 'Closed/Resolved by caller';
        glide.close_notes = "This case has been closed as it was Completed in SwingSpace";
        glide.incident_state = IncidentState.CLOSED;
        glide.active = false;
        glide.closed_by = glide.resolved_by;
        //The below variable needs to be defined to close an incident that has already been created
        //This variable is required because it is need to bypass the "Prevent state change to close"
        glide.u_incident_autoclosed = "autoclosed";
        //glide.work_notes = email.body_text;
        glide.comments = "Case Was Auto-Closed By CloseSwingSpaceCase Email" + "\n\n" + email.body_text;
        glide.update();
        sys_email.target_table = glide.getTableName();
        sys_email.instance = glide.sys_id;
    }
}
For code to be readable it has to be understood nearly at a glance. To improve this code's readability, it not only needs a complete overhaul; the business process behind it should be revisited. Formatting doesn't fix bad code or decrease reading time; it only makes it prettier. Yet the usual answers from peers are: format it, add white space, add comments, and so on.
A revisited version would begin something like this:
|
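The revisited version itself wasn't captured in this copy of the article. As a hypothetical sketch of the kind of entry point described, with each step named and the ServiceNow-specific query/close steps elided, it might begin like this:

```javascript
// Hypothetical reconstruction; the original refactor was not captured here.
// The point is that the entry point reads at a glance, with each step named.

function isCloseHotDeskRequest(subject) {
    return subject.indexOf('CloseHotDeskRequest') > -1;
}

function extractIncidentNumber(subject) {
    return subject.substring(subject.indexOf('#'), subject.length - 1);
}

// 'Doe, John' -> 'John.Doe@domain.com'
function buildOwnerEmail(techOwner) {
    var comma = techOwner.indexOf(',');
    var first = techOwner.substring(comma + 2).replace(/\s/g, '').replace("'", '');
    var last = techOwner.substring(0, comma);
    return first + '.' + last + '@domain.com';
}

// Entry point: the GlideRecord querying and closing steps would follow the
// same pattern (findIncident, assignOwner, closeIncident), each a small function.
function handleCloseHotDeskRequest(subject, techOwner) {
    if (!isCloseHotDeskRequest(subject)) return null;
    return {
        incidentNumber: extractIncidentNumber(subject),
        ownerEmail: buildOwnerEmail(techOwner)
    };
}
```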
Then the logic is further encapsulated... Keep in mind that the inside of each of the functions will be about 10 lines long.
|
A goal of a review is also to help decrease comprehension time, to help craft quickly digestible code. With a function like the one above, how much good are comments, exactly? To teach someone syntax? To clutter the function by restating precisely what the code already says?
Another problem peer reviews fail to catch is the continuous rewriting of the same boilerplate code, simply because it's the way things are done; it's familiar.
In the piece of code below, what is it doing? How long did it take you to work that out? Would comments help? Can it be refactored for friendliness?
|
Is this easier to consume? Would comments make it any clearer? How many of our peers would have considered this approach?
|
If comments aren't there to document API usage for consumers, the comments themselves are a sign of unreadable code. With reader-friendly code, there should be no need to spend time reading comments. Comments shouldn't be advisable merely because they have become synonymous with "best practice."
Do we know any less-skilled peers who would be able to take de facto ServiceNow code and improve it? Who could identify unscalable code, missed opportunities, even bad practices? How can peer reviews be of any help if their effect is to promote the very practices that aren't advisable?
Who has thought about preventing downstream changes? About scalably overriding functionality? Who has raised these in a peer review? Couple that with a junior working under an influencer who adamantly speaks against any professional coding practice not found in OOTB code, and then what? What happens to that junior's craft? Will that junior become a gatekeeper as well? We all lose.
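As one illustration of what "scalably overriding functionality" can mean (my own sketch, with hypothetical names): layer the override on top of the base function through composition, so downstream code never has to edit the original.

```javascript
// Base behavior, left untouched for every existing caller.
function baseFormatter(record) {
    return record.number + ': ' + record.short_description;
}

// An override layered on top; callers keep the same call site,
// and new behavior composes with the base instead of modifying it.
function makeVerboseFormatter(format) {
    return function (record) {
        return '[' + record.priority + '] ' + format(record);
    };
}

var formatRecord = makeVerboseFormatter(baseFormatter);
```

Swapping or stacking further decorators requires no downstream change at any call site, which is the property the review should be asking about.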
Below is a list of problems I've seen in code from consultants billed at an aggregate sum of millions of dollars, who have sworn by peer reviews, code reviews, and linters. They are also the reasons I'm hesitant to support peer reviews in the ServiceNow space at this point in time.
- Hardcoded values (of any kind)
- Commented code left behind
- Unnecessary comments
- Severe overloading (3+) of functions (less readable, more complex functionality, un-composable)
- Hard dependencies/hard binding
- Repeated code and duplication by necessity
- Isolation of code in the wrong artifacts
- Use of outdated programming practices
- 300-deep conditional statements
- Wrong patterns at the wrong time
- Trying to instantiate a factory as if a constructor
- Code riddled with downstream changes
- Inconsistent formatting
- No coding conventions
- No use of GlideElement or GlideRecord functions
- Unnecessary code
- Conflating api knowledge with coding skill
- Single responsibility functions that reach out of themselves for data
- No entry points
- No logical separation or onion abstractions
- No domain identification
- Boilerplate use of try/catch and null checks
- No common entry point for similar functionality (like errors, logging, HTTPRequest, AJAX, etc)
- Thousand-plus lines of code in a single script/widget
Below is an interview question for senior ServiceNow programmers whose resumes read "expert-level JavaScript programmer and code reviewer for X number of years."
What can be done to declutter the code below from try/catch, while creating a single entry point and a common try/catch for every time aFunctionToDoSomething is called?
|
The candidates who have answered correctly in theory haven't been able to build it; that alone shows we are still on our way to a better place...
The answer is to wrap a variation of the function below, adding a single entry point as well as a common error handler for every use of aFunctionToDoSomething. There is no more unnecessary try/catch around the function, developers don't get to choose whatever they feel like in the moment, and there is only one try/catch in the entire codebase. This is what should be expected from quality peer reviews: that they improve each other's skills.
|
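The answer's code wasn't captured in this copy. A sketch of the kind of wrapper described, with hypothetical names, might look like this: a higher-order function owns the only try/catch, and callers go through the wrapped entry point.

```javascript
// Hypothetical sketch; names are illustrative, not from the original answer.
// The single try/catch in the codebase lives here:
function withErrorHandler(fn, onError) {
    return function () {
        try {
            return fn.apply(null, arguments);
        } catch (e) {
            return onError(e);
        }
    };
}

// The function being decluttered; it can now throw freely.
function aFunctionToDoSomething(input) {
    if (input == null) throw new Error('input is required');
    return 'processed: ' + input;
}

// The single entry point everyone calls; the common handler could log
// centrally, return a default, rethrow a typed error, etc.
var doSomething = withErrorHandler(aFunctionToDoSomething, function (e) {
    return 'handled: ' + e.message;
});
```

Every call site now uses doSomething, so error handling is decided once instead of at each developer's whim.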
The next is another sample from an actual code review between two colleagues.
The consultant explained to me that he and his JavaScript-expert colleague sat down for a very long time and determined that, because my function exists inside a scope, it is simply unusable.
It has to be rewritten as:
|
|
If you are planning on doing peer reviews but don't have the knowledge to improve the quality of the work, wouldn't the time be better spent learning? Use the hour for code games, reading the language docs, or looking for different approaches. Search for whether something can be done differently and why; don't do it just because someone else says so. It will show in your work.
I firmly believe that with time the skill set in our space will be on par with other technologies. Until then, it's difficult to support peer code reviews just for the face value of a quality dev process.
Javier,
Appreciate what you have written. It's an area that we all have to visit at some point.
Thanks
John Quinonez