Mythbusting: Comfort zones are always negative in software development. Is that true?

Curated by: Sergio A. Martínez

Being a software developer is a pretty sweet gig. There’s something to be said about a job where you get to work with the latest technology, solve complex problems, and see your work used by people all over the world. It has challenges, of course; burnout tends to be common, and with the ever-changing landscape of software, it can be easy to feel like you’re constantly playing catch-up. However, even with these situations, software development is a career where it’s easy to feel passionate and find a niche where your skills can shine.

And one of the great things is that there’s always something new to learn. Whether it’s a new programming language, a new tool, a new framework, or a new way of approaching a problem, keeping your skills sharp is essential to being successful in this field. This is why some people think that being too comfortable in a software job can be bad for learning and innovating. After all, if you’re not being challenged, it’s easy to get complacent and fall behind the curve. Right?

The common idea seems to be that, when it comes to creating software, you need to get out of your comfort zone to progress; many developers look for a constant challenge, striving to grow their skills as much as possible, and mastering lots of different disciplines and technologies for the thrill of it. However, there’s also a lot to be said for having a solid foundation of knowledge and experience to build on, which leads us to the question: Is it bad to keep a comfort zone in software development?

“It comes down to what somebody wants out of a career”, says Helena Matamoros, Head of Human Capital at Scio, about the context behind a comfort zone. “It’s definitely not wrong if you are only interested in having a stable and well-paying job where you are skilled, deliver the expected results, and can manage a pretty defined work-life balance. If you are good at what you do, who’s to say that’s the wrong path to take? Being comfortable in the job can lead to greater creativity and innovation, as you’re more likely to take risks when you’re not constantly having to worry about making mistakes.”

The skill behind developing skills

“Comfort zones” are usually misunderstood, which explains many of the negative qualities people attribute to them. A comfort zone, after all, is an important concept in psychology: a behavioral state within which a person feels relaxed and at ease, a necessity when people are interested in learning new skills. The first time you try something new, it’s easy to feel anxious and uncertain, but as you become more familiar with the task, you are more open to taking risks and trying new ideas, which is essential for learning. In the words of the Psychology Spot article “What is the Comfort Zone – and what’s not?”:

“[Comfort zones are] a ‘space’ that we know completely and in which we control almost everything. The habits that we follow with assiduity are those that allow us to build that comfort zone since we know exactly what we can expect from each situation. By minimizing uncertainty, we feel that we have everything more or less under control, so we believe we are safe.”

Some of the negativity around comfort zones comes from believing that the “comfort” part equals “complacency” or “stagnation” instead of “control”: the feeling of knowing exactly what you are doing in a given situation, which is pretty important when performing a job that requires as much focus and understanding as software development. How can a person expect to take a risk if they feel uncomfortable about it? Or learn everything there is to know about a subject if they are not willing (or able) to stay focused on it for a significant period?

“I’m not interested in people getting rid of their comfort zones”, says Rhonda Britten, a bestselling author on the topic of fear, as quoted by this WebMD blog. “In fact, you want to have the largest comfort zone possible — because the larger it is, the more masterful you feel in more areas of your life. When you have a large comfort zone, you can take risks that really shift you.”

In other words, the idea of “stepping outside of your comfort zone to grow” might not be a useful framework for understanding how people acquire and develop new skills; it’s more about knowing where your talents lie and making the most of them when given the chance. A different approach might be useful, especially if you, as a developer, are interested in learning as much as you can, with a rock-solid foundation that lets you push through every new challenge.

Carving up your comfort zone

When used properly, comfort zones can be a positive force. After all, it’s only natural to feel more comfortable working with familiar technologies and tools, and it can be more efficient to stick with what you know rather than attempting to apply something new every time you join a project. But this doesn’t mean you shouldn’t try to learn new skills if you feel confident enough in what you already do. For example, if you’re already an expert in a particular programming language and can fall back into it if the need arises, why not learn a new one? Or go a step beyond and go into different parts of the software development process, like QA or Project Management? This way, you have two possible outcomes: you either learn a new, valuable skill you can use, or you discover an area that is not right for you and can go back to what you are good at.

“If your objective is to grow within a company, climbing up positions and leading a team of people, a comfort zone in the classic sense will not help you”, continues Helena, describing a bad application of a comfort zone. “It’s more of a personal preference about how you apply your skills, and it doesn’t mean complacency or a ‘quiet quitting’ situation. It’s about building a rock-solid base to take new risks and opportunities, extrapolating what you are good at into other areas where your talent might shine. Being comfortable doing so is an advantage, not a bad thing.”

Ultimately, whether being comfortable on the job is good or bad for learning and innovation depends on the individual and the situation. There’s no question that comfort is important in any job; if you’re stressed out and uncomfortable, it’s going to be pretty hard to focus on your work. But when it comes to software development, focusing on honing your existing skills and becoming an expert in your field is different from being constantly pushed to learn new things and innovate. While it’s important to push yourself occasionally, there’s no need to completely overhaul your workflow every time you hit a snag. 

Of course, there’s a balance to be struck here. You don’t want to be constantly stressed out and on the verge of burnout, but never being challenged or forced to step outside of what you are used to doing is not good either, as you may find that your skills start to stagnate. Enjoying what you learn, applying it, and mastering it as a skill is never a bad thing.

The Key Takeaways

  • One of the best things about software development is learning new skills and growing as a developer, but there are some misconceptions about how that’s achieved.
  • Typically, comfort zones are seen as something negative, but that’s because they tend to be misunderstood.
  • Comfort zones, when used properly, can give you the confidence to learn new skills, take risks and open yourself to mastering all kinds of technologies.
  • Being comfortable in your skills is never a bad thing, and in fact can be a great starting point to always learn something new.

Scio is an established Nearshore software development company based in Mexico that specializes in providing high-quality, cost-effective technologies for pioneering tech companies. We have been building and mentoring teams of engineers since 2003 and our experience gives us access not only to the knowledge but also the expertise needed when tackling any project. Get started today by contacting us about your project needs – We have teams available to help you achieve your business goals. Get in contact today!

Mythbusting: Is learning new frameworks always beneficial for the development team?

Curated by: Shaggy

Half of the positive outcomes in software development come from choosing the right approach to it. Keeping your processes updated is critical to ensure that a project goes smoothly, as software development is a complex process that requires careful planning and execution. To that end, there are a variety of different approaches, each with its advantages and disadvantages, and the right one is ultimately determined by the specific needs and goals of the project. So, with that in mind, let’s talk about frameworks.

In software development, a framework is a set of tools and libraries providing a common structure for building applications. A web application framework, for example, may include libraries for handling requests and responses, session management, and template rendering, as well as functionalities for routing, authentication, and other common tasks. By providing a structure, frameworks can make development easier by reducing the amount of boilerplate code needed, in addition to providing a consistent approach to solving common problems.
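To make the boilerplate-reduction point concrete, here is a toy sketch of the routing layer a web framework typically provides. This is a hypothetical illustration, not any real framework’s API: the point is that the framework owns the generic dispatch logic once, so every application built on it only has to write its own handlers.

```python
# A toy illustration of what a web framework's routing layer does:
# it maps URL paths to handler functions so each application doesn't
# have to re-implement request dispatch. (Hypothetical sketch only.)

class Router:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        """Decorator that registers a handler function for a URL path."""
        def decorator(handler):
            self.routes[path] = handler
            return handler
        return decorator

    def dispatch(self, path):
        """Look up the handler for a path; return a status and body."""
        handler = self.routes.get(path)
        if handler is None:
            return 404, "Not Found"
        return 200, handler()


app = Router()

@app.route("/")
def index():
    return "Hello, world"

@app.route("/about")
def about():
    return "About page"
```

A real framework layers much more on top of this (sessions, templates, authentication), but the shape is the same: the common structure lives in the framework, and the application code shrinks to the parts that are unique to it.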

That’s why software developers and project managers are always on the lookout for new tools and frameworks that can make things more efficient, ensuring they remain updated and knowledgeable in the latest trends. However, there is often a trade-off between using the latest and greatest technology and having to learn how to use it effectively; anything new added to an established workflow will include a learning curve, and in some cases, the latest technology can slow down a team rather than help them achieve an outcome more efficiently. 

“Developers may need to spend time learning the new tool properly before they can start using it effectively, especially if the new tool is different enough from what the team is used to, causing more problems than it solves”, says Adolfo Cruz, Partner and PMO at Scio. “Ultimately, whether or not developers benefit from using the latest frameworks in software development depends on the particular case. It’s important to weigh the pros and cons of each new tool before making a decision.”

Is it a good idea to constantly adopt new frameworks?

There’s no one-size-fits-all answer to this question, but we can see on paper why this might make sense; by using the latest frameworks, a team can take advantage of the most up-to-date features and capabilities, and they are generally more efficient than older ones, which can save your team time and resources in the long run. Moreover, choosing a new framework shows that your team is committed to keeping up with the latest trends and technologies.

“In my opinion, [frequent change of frameworks] can be a negative thing, because sometimes the latest version still has some kinks to work out”, says Carlos Estrada, Lead Application Developer at Scio. “Using a technology that has already been tested by the community or by your team can save you a lot of bugs and headaches. It’s not wrong to try the latest framework at every opportunity if you are part of a start-up that’s barely getting off the ground, but for a more established company with clients and expectations, I wouldn’t recommend it.”

With that in mind, adopting a new framework is not something to be taken lightly, and the best timing for this will vary depending on the specific project and the team involved, as well as the resources you can commit to it. To that end, there are a few general factors to keep in mind when deciding whether or not to implement a new framework into your development cycle: 

  • First, consider whether the new framework offers significant advantages over the current one. If it’s simply a personal preference, it may not be worth the time and effort required to switch frameworks. However, if the new framework offers significant improvements in terms of performance or efficiency, it may be worth considering. 
  • Then think about whether the team is ready and willing to learn a new framework. If team members are resistant to change, it may not be worth force-feeding them a new framework, lest it critically disrupt the development of a product. However, if they’re open to learning something new, adopting a new framework can be an excellent way to keep them engaged and excited about the project. 

So logically, there are downsides to this approach: an organization that is constantly switching to new frameworks negates any advantages a framework might offer in the long run, especially in a field like software development, where innovation and disruption are always moving forward.

“Many developers spend lots of time constantly learning the next new framework. There are many existing frameworks, and they move in and out of vogue rapidly. As mobility matures, developers will benefit more from consistent approaches to mobile development as they move across SDKs and frameworks. A consistent approach to security, integration, development, and management enables quality and speed”, are the words of this article on some common myths about software development; although it’s focused on mobile application design, it’s also good advice for any kind of software work.

So, while it may be tempting to try a new framework with every new project, there are some definite advantages to sticking to one specific framework. For starters, using the same framework helps streamline the development process, since you and your team will already be familiar with the tools and syntax, and it makes it easier to share code between projects, which can be a huge time-saver. Finally, using the same framework across multiple projects will give you a better understanding of its strengths and weaknesses, which can help you develop more efficient and effective code.

But how do you choose the “best” one?

Ultimately, there are several compelling reasons to be consistent with your frameworks during every project, and by doing so, you can enjoy a smoother development process and better code quality. However, different projects and challenges might need different approaches, so selecting a framework that makes sense for your organization requires consideration and care. As starting points, you might want to consider the following:

  1. Support: Most frameworks are open-source and community-driven. One with a big pool of developers and engineers contributing to it and a direct line of communication in case of any issues will always be preferable. After all, a framework is as good as the people surrounding it, so if their last update was in 2018, no matter how good a framework might be, sooner or later it can leave you behind the curve.
  2. Security: The more security functions you can add through a software framework, the better, so choosing one that allows you this flexibility already makes it hard to top.
  3. Sustainability: Does the chosen framework keep up with the software development lifecycle? If not, then you are not working with a tool with a sustainable future, so selecting something scalable and flexible enough might be the best course of action.
  4. Documentation: Linked to the ‘Support’ point above, thorough and well-written documentation of the framework is invaluable to learn it quickly, a critical requirement if you are looking for a new framework that makes upgrades easy to implement.
  5. Outcomes: What does it offer to a client and a final user? Does it allow a project to progress faster (for the client) while making it easy for feedback to be implemented satisfactorily (for the end user)? How a framework works beyond the development cycle is always an important consideration.
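One lightweight way to apply criteria like the five above is a simple weighted scorecard. The sketch below is purely illustrative: the criterion names mirror the list, but every weight and rating is a made-up placeholder a team would replace with its own priorities, not a recommended formula.

```python
# A minimal weighted scorecard for comparing candidate frameworks.
# All weights and ratings here are hypothetical examples; adjust them
# to reflect your own project's priorities.

CRITERIA_WEIGHTS = {
    "support": 0.25,
    "security": 0.25,
    "sustainability": 0.20,
    "documentation": 0.15,
    "outcomes": 0.15,
}

def score_framework(ratings, weights=CRITERIA_WEIGHTS):
    """Combine per-criterion ratings (0-10) into one weighted score."""
    missing = set(weights) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(ratings[name] * weight for name, weight in weights.items())


# Example: two made-up candidates as rated by the team.
framework_a = {"support": 9, "security": 7, "sustainability": 8,
               "documentation": 9, "outcomes": 7}
framework_b = {"support": 6, "security": 9, "sustainability": 7,
               "documentation": 5, "outcomes": 8}

best = max([("A", framework_a), ("B", framework_b)],
           key=lambda pair: score_framework(pair[1]))
```

The value of writing the comparison down this way is less the final number than the conversation it forces: the team has to agree on what each criterion is worth before arguing about any particular framework.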

Ultimately, however, there’s no perfect answer to this question, and it will vary depending on the specific circumstances of each development cycle. And while there are benefits to using different frameworks for different projects, there is also value in being consistent with one particular framework, like reducing training costs and onboarding for new developers, making it easier to share code between different applications. Most importantly, it can promote greater consistency in the quality of the final products, so if you keep these general considerations in mind, you should be able to decide what’s best for your project and team at every turn.

The Key Takeaways

  • Selecting the correct approach to development can make the difference between a good outcome and a bad one.
  • Frameworks are a great example of this: selecting the correct one for a project can make things easier for everyone involved in development.
  • New frameworks are coming up all the time, so weighing their advantages and disadvantages is critical for any business looking to adopt them.
  • There are lots of reasons why having a consistent set of frameworks might work better in the long run than using whatever new one comes up, in terms of time, effort, and money.

Mythbusting: Is ‘native development’ always the correct choice when designing an application?

Curated by: Sergio A. Martínez

When it comes to creating an app, native development is often seen as the gold standard. After all, native apps are designed specifically for a particular platform, making them more user-friendly and efficient, and allowing developers to create products optimized for specific devices and operating systems. This means that they can take full advantage of the features and capabilities of a specific platform, typically resulting in apps being more responsive and with better performance.

However, when it comes to mobile development, there are a few different schools of thought about the best way to design an app. While native development is popular, others prefer cross-platform or web development, and each has important advantages and disadvantages that could very well decide the outcome of a project from the very beginning. For that reason, it’s important to choose the right approach, which ultimately depends on the specific needs of the app, and the resources needed to bring it to life. So, today we bring the question: is native development always the best choice? Or does it have some hidden hurdles that could jeopardize a project in the long run? 

Because one thing is sure: with the complexity of today’s development environments, these questions are more difficult to answer than ever, but the correct choice is critical to ensure a positive outcome in any development project. After all, the wrong choice can mean more than a negative outcome (even setting a project back by months or years), so today we want to take a look into native development, the needs of mobile app design, and the pros and cons of choosing either approach, to see if the myth of “native development is always better” holds true or not.

Going native (in app development)

The debate between native and web-based app development has been ongoing for a long time. Seeing how mobile applications are increasing in importance almost daily, pros and cons are thrown around all the time, and the correct choice for a given project depends on a wide range of variables. One key consideration, for example, is the target audience for the app; if the app is being developed for a general consumer audience, then a web-based approach may be more appropriate, because they can be accessed across a wide range of devices, including laptops, tablets, and smartphones. 

In contrast, native apps are typically designed for a specific operating system (such as iOS or Android) and can only be installed on devices that use that OS, likely making native apps less accessible to potential users, with the trade-off that this approach reduces the amount of work a development team needs to do to get the application up and running. With a very specific environment, there’s less room for errors. 

Another important factor to consider is the level of functionality required by the app: if it needs to take advantage of features that are specific to a particular platform (such as GPS or camera), then native development may be the only option. However, if the app can function adequately using web-based technologies, then a cross-platform approach may be more efficient and cost-effective. The correct choice depends, then, entirely on context: 

“Software development is a complex process, and there are many decisions that need to be made upfront. Some of these choices are technical in nature, others are more strategic, and still others are more creative, such as coming up with new features or designing the user interface. With so many things to consider, it’s no wonder that making the right choices is critical for success”, says Adolfo Cruz, PMO Director and Partner at Scio. “Unfortunately, there is no easy formula for ensuring that all of the choices you make are correct; it requires a combination of experience, knowledge, and intuition. And in the case of native or web-based development, thinking ahead is critical, in terms of the resources and work needed to make them work.”

There’s one thing for sure: native apps can be more expensive to develop and maintain. There are a lot of factors that contribute to their high cost, but first and foremost is that you have to design and develop separate versions of your app for each platform (iOS, Android, Windows, etc.) if you want to broaden your user base later. That means more man-hours spent on development and more money spent on software licenses and other tools; for example, an app for iOS would need to be written in Swift or Objective-C, while an Android app would need to be written in Java or Kotlin, making porting between platforms difficult. In addition, each platform has its own set of guidelines and best practices that need to be followed, making the development process more time-consuming and complicated.

In that sense, native app development can (counter-intuitively) be generally more complex than web or hybrid apps, increasing the odds that something will go wrong during development. On one hand, they can be more difficult to scale, as they need to be developed separately for each platform. And on the other hand, native apps can be less flexible than cross-platform or web-based apps, which can be developed using a single codebase and then deployed on multiple platforms. 

“Developers often think that it’s easier to strictly focus on building apps with the manufacturer SDKs and getting them to market. Native development has advantages, but without an integrated approach that provides app management, analytics, testing, and back-end integration, native app development has the potential to create more issues, more complexity, and increased spending down the road”, is the analysis of the tech news site SD Times. “If integration isn’t done right the first time, future projects will be delayed, and it will lead to an influx of performance issues that will only lead to more work for the developers and potentially unsatisfied users.”

A zero-sum game

If you think that choosing between native development and a hybrid or web-based approach seems to be a zero-sum game, you would be right. After all, there’s no way you make a choice that wouldn’t have a counterweight somewhere during the development process, so careful consideration should be given to your approach to designing a new app. In terms of the needs of a project, we can select four key areas that your team might need to consider before starting a project:

  • Resources: The amount of time, money, and man-hours needed to bring the project to fruition. The more platforms, the more resources are needed.
  • Userbase: The number of users a specific app can reach depending on its platform. The more platforms, the bigger the number of users.
  • Functionality: The number of challenges, errors, and bugs a team might need to fix, which grows as the number of intended platforms grows. 
  • Future: The more platforms the app is available on, the easier it is to keep it available for longer, without running the risk of getting it “landlocked” in a single environment.

Native development, for example, can provide a better user experience, but as we already mentioned, it may be more time-consuming and expensive. Web-based development, in contrast, is faster and generally needs fewer resources, but runs the risk of offering a subpar experience for some users with the “wrong” kind of device.

And of course, this comparison doesn’t take into consideration things like specific features needed for the app (which might change the weight of each of these choices), or more nuanced circumstances in development, like the adoption rate of a given platform in a specific region, or circumstances outside development (like legal requirements when publishing apps), but it illustrates how this decision might require a careful balance between outcomes. 

“It can be tempting to want to develop a native app for every new platform that comes out. After all, native apps tend to provide the best performance and the most seamless user experience”, concludes Adolfo Cruz. “However, there are some specific scenarios where native app development just doesn’t make sense, such as if you’re only developing a simple app or if you need to support older devices. In general, it’s important to carefully weigh the pros and cons of native app development before making a decision. Every positive outcome comes from understanding the nuance of development, and ultimately depends on the needs of the project, keeping in mind that each approach has its advantages and disadvantages.”

The Key Takeaways

  • Creating software requires making a lot of difficult choices to ensure the success of an app, especially in the current mobile environment.
  • There are a wide variety of approaches to this, but one of the most popular is native development, or designing for one specific platform.
  • Although native development has lots of advantages, especially on the back end, sometimes these advantages are not enough to counter a web-based approach.
  • Careful consideration of the pros and cons, a clear picture of the direction of the app, and using your resources properly are what will determine the need for native development, but it shouldn’t be treated as a default.

Quiet Quitting: Myths, facts, and misunderstandings about a new reality of working

Curated by: Sergio A. Martínez

What is the future of work? That is a question that virtually every organization, in both the private and public sectors, from software to manufacturing to service and everything in between, has been asking themselves since the onset of the pandemic in 2020. Agreeing on an opinion seems to be impossible, but what we are sure about is that our idea of “work” has changed dramatically, with new ideas, models, and philosophies getting discussed every day.

“Quiet quitting” is one such concept. After this term became popular on social media in 2022, the underlying meaning of “quiet quitting” started to elicit all kinds of opinions, from those who see it favorably, to those who see it as the norm (and nothing revolutionary), to those against the attitude for a variety of reasons. For those not in the know, “quiet quitting” means “performing the strict minimum requirements of a job within the allotted work hours”, a philosophy gaining supporters across all industries and with all kinds of workers and collaborators. And getting to the root of this line of thinking is not difficult. 

“People are tired of being stifled by leaders who don’t trust or value them. If there’s no freedom to take a risk without fear of being punished for a bad result, then why take a risk? If there’s no acknowledgment of their capacity and no opportunity to contribute their full value, then why would they want to do more?”, says the analysis of Forbes Magazine in their article “The Cure For ‘Quiet Quitting’: Humanize Work”, which takes a look at the current job landscape and the factors that might push a worker into this mindset.

After all, it’s no secret that the current job market is becoming increasingly competitive, and people are finding it harder to get the jobs they want. At the same time, jobs are becoming more demanding, with some employers increasingly expecting employees to work longer hours for less pay, which not only causes a lot of stress and unhappiness among workers but also pushes them to question whether work is really worth it. Some people are even choosing to opt out of the traditional workforce altogether in favor of a more flexible lifestyle. This, in turn, is creating severe shortages in many fields that, with our current trajectory, will cause a lot of problems that will only continue to grow. 

“I think this is old behavior that has always existed to some degree, but now it has a name”, continues Helena about the origins of quiet quitting. “It used to be a lot more common in other areas (for example, the public sector), where you could stop working at a certain hour and not have to worry about it. But in the software development industry, this issue is a lot more complex. The issue is how to measure the effectiveness and productivity of a team member. It’s easy to see someone who answers emails or does things outside of work hours as a good employee, but I don’t agree with that either. You are not giving your collaborators a complete work-life balance.”

The numbers don’t lie; according to the online publication Axios, “82% of Gen Zers say the idea of doing the minimum required to keep their jobs is pretty or extremely appealing”, and a good portion of them are already committing to that, bringing back the idea of “working to live” instead of the other way around, putting priorities like family, friends and even hobbies ahead of work as the norm.

Finding the right angle for an old challenge

Quiet Quitting: Myths, facts, and misunderstandings about a new reality of working

“The thing about ‘quiet quitting’ is that it doesn’t describe a specific phenomenon, but many different situations, each with its own context. Maybe you are an effective person within your working hours, and not being available after you shut down your computer doesn’t mean you are not an engaged collaborator delivering on time”, expresses Helena Matamoros, Head of Human Capital at Scio, about the increasing popularity of this term. “After all, it’s easy to see when a person is actually quiet quitting: they miss deadlines, they are often unavailable during work hours, their emails go unanswered, they appear disengaged during meetings, or they don’t take advantage of anything extra the company offers, like social meetings or training courses. And even then, that attitude can sometimes be the result of burnout instead of active disinterest. It’s a complex situation that the name ‘quiet quitting’ doesn’t completely describe.”

The thing is that, when trying to separate a good collaborator from a not-so-good one, past strategies don’t work anymore. In the old days, the traditional workplace was all about face time and being physically present in the office, but with the rise of technology, that’s no longer the case; good employees cannot be judged by how many hours they’re putting in at the office, but rather by the results they’re achieving. This misunderstanding can lead to some myths about what an engaged employee is, which harm more than help engagement within the workplace:

  • First, good employees are always available.

    As already discussed, with email and instant messaging, it’s expected that employees will be available outside of normal working hours. But that doesn’t mean those good employees are always glued to their devices. They know how to strike a balance between work and life, and they know when to unplug and take a break.

  • Second, good employees prioritize work above everything else.

    Many people still believe that employees should put their jobs ahead of any other priorities, even if it means sacrificing their well-being. However, a smart workplace knows that employees thrive when they feel they are valued members of a team, and companies should focus on creating an environment where employees can have a good balance and feel supported and appreciated. 

  • Third, good employees are always hyper-focused.

    When it comes to working, hyper-focus is often seen as a good thing; the ability to laser in on a task and get it done quickly and efficiently is generally viewed as a positive trait. But contrary to popular belief, employees who take breaks during the workday, or take time to socialize, are more productive than those who don’t. Likewise, employees who telecommute or work flexible hours are just as productive as those who work traditional nine-to-five schedules. In the end, it depends on the person and the rhythm they need to achieve good results.

“Seeing it from both sides, the employee and the employer, it all comes down to having a clear work culture within the organization that everybody can understand and adopt”, explains Helena, referencing how Scio tries to be flexible and offer resources to keep its collaborators as far from burnout or disengagement as possible, something especially important when our company collaborates with remote developers and engineers from all over Latin America. “If you know what is expected of you, and what is acceptable or not for the company, it’s easier to identify if you are dealing with someone practicing quiet quitting. In the end, there is no one-size-fits-all solution, but by debunking outdated myths and practices, any company can create an environment that is tailored to the needs of its employees.”

Still, there are pros and cons to both sides of the “quiet quitting” argument. On one hand, working strictly within your limits can help you avoid burnout and maintain a healthy work-life balance. On the other hand, it can also make you appear inflexible and unresponsive to the needs of the employer. And while in some cases working longer hours can help you get ahead in your career, it can also lead to exhaustion and poor health, which could make such an effort too costly. Ultimately, what we can conclude is that this attitude is not something new; its popularity is a symptom that flexibility and balance in the workplace are more important and appreciated than ever, and a company that supports and understands its collaborators doesn’t need much else to keep a team engaged, productive, and motivated, always ready to give its all.

The Key Takeaways

  • The term “quiet quitting”, while popular in social media, is not a new phenomenon, although it can be taken as a symptom of a larger issue.
  • The main issue is that the term “quiet quitting” falls short when describing the wide range of attitudes and practices that come with working.
  • What it points out is the increasing need for a better work-life balance; quiet quitting and burnout can both be the result of a workplace that fails to provide it.
  • What really matters is the outcome achieved by every individual worker; with the correct support, keeping a collaborator engaged and motivated is far less difficult.

Scio is an established Nearshore software development company based in Mexico that specializes in providing high-quality, cost-effective technologies for pioneering tech companies. We have been building and mentoring teams of engineers since 2003, and our experience gives us access not only to the knowledge but also to the expertise needed when tackling any project. Get started today by contacting us about your project needs – we have teams available to help you achieve your business goals. Get in contact today!

The challenge of working smarter: Cognitive inertia and software development. 


By Scio Team

Whether you are coding software or managing a company that creates software, the name of the game is optimization: there’s always a better way to do things, a wrinkle to iron out, a bump to smooth over. However, even if we somehow reach a perfect process, it will probably not last long. Technology is always moving forward.

Then why is it often difficult to adjust your development practices to ensure you always obtain a better outcome? Why is it so hard to leave behind “tried and true” methods of development to try new ideas that improve the efficiency of any process?

It’s not surprising to find that the root of many of these issues lies within human psychology, specifically in a phenomenon called cognitive inertia, which can help us understand how we conceive our practices; the sooner we master how it works, the better our outcomes will become.

The human side of change.

“Cognitive inertia” is a term gaining popularity in software development, and with good reason: it aptly describes why it can be so hard to change approaches to development, even in the face of an evident need to try something else:

“Change management is an age-old problem; migrating to a new process with new technologies can represent a big change. The management teams are met with cognitive inertia and a long list of reasons why new methods and technologies will not work. So, instead, they work harder, and the harder they work, the farther behind they get”, points out Barry Hutt, CRO at Viviota Software, in his post “Cognitive Inertia a great challenge to innovation”.

It’s a paradoxical outcome, but to begin to understand this issue, we should define clearly what “cognitive inertia” is. Cognitive inertia is not “belief perseverance”, or the phenomenon of maintaining the same belief even when presented with new information. Instead, cognitive inertia is the inability to change how a person processes information, which is a much more complicated framework that involves motivation, emotion, and developmental factors. 

Its consequences are easy to see in software development when we think of practices like testing or brainstorming, and they make the old adage “work smarter, not harder” a difficult one to implement, especially as a project or an organization grows in complexity.

“Cognitive inertia evolved because the brain optimizes energy efficiency. If you keep the same behavior and don’t question it, your brain conserves space and can make faster, simpler decisions. On a social level, maintaining consistent behavior preserves social cohesion by maintaining social roles and responses”, explains the blog “Cognitive Inertia: The Status Quo Bias” by Joseph Adebisi.

However, it’s obvious why this can bring problems in the long run in a field like software development. After all, even if roles in a development team are clearly defined, the multitude of solutions that need to be reached at every step of the process (from the ultimate goal of the client to fixing the smallest of bugs) benefit from the creativity that emerges from having multiple approaches.

The key to collaboration


The approach Scio takes to this issue, both internally and in the work we do with our clients, is knowing that a “solution” is more than having the seemingly right answer for everything; it is developing a process that lets you question and rework the methods you used to arrive at it, to fine-tune the outcome.

“When you build walls, it’s easy to keep piling bricks on, one after another, in every house you build. That might work for a while, but if you are now looking to build something with a different purpose, like a cathedral or a hospital, will that approach still be the best?”, comments Luis Aburto, CEO and Co-Founder of Scio. “What happens when you partner with someone who comes and says ‘hey, maybe this bricklaying will not support the multiple stories we need for a hospital, so what if we try this instead?’”

A culture of constant sharing through collaboration, then, might be a way to avoid the pitfalls of cognitive inertia. After all, cognitive inertia, like real inertia, keeps the same trajectory if nothing initiates a change, so the more different perspectives you have, the stronger the final product may be.

“Human beings love to help. Doing it productively and seeing people overcome obstacles is a very rewarding experience, and at a company like Scio, where collaboration is a key part of who we are, you also get the benefit of cross-pollinating different parts of your organization”, says Yamila Solari, Co-Founder and Coaching Leader at Scio. “If you create a culture of mutual help and support, where one person talks to another and so on, your culture is always enriching itself.”

This makes coaching one of the best tools Scio has to keep our organization moving forward, making sure that knowledge gets shared between collaborators to strengthen the outcomes of every person and every team. This circles back to our earlier article about outputs and outcomes in software development, where we try to understand the purpose and goals of any project we collaborate on before deciding on the approach that will work best for that specific job. Sometimes laying bricks in the usual way will be enough, but that doesn’t mean developers shouldn’t keep an open mind to try new things if they hit a snag during development.

Cognitive inertia in the day-to-day


However, one should not assume that the issue of cognitive inertia only affects an organization at the macro level, or that it is always a bad thing; it’s part of our daily work whether we notice it or not. For example: if you are focused on a task, and an interruption comes (be it a software update, an Internet outage, an unforeseen meeting, or even a coworker just stopping by to ask something), how difficult is it for you to resume your rhythm at full speed? 

Martin Cracauer, CTO of the software development firm Third Law, holds the opinion that the way our brain absorbs information and uses it in the short term is a form of cognitive inertia, and keeping information properly compartmentalized is a way to ensure a task, or a whole project, doesn’t get derailed:

“A lunch break absorbing lots of information that has nothing to do with your work task is relaxing because it does not compete with the work task memory. But a work meeting that touches actual work stuff competes for the same cognitive machinery. […] Your company makes its money on the programming tasks that are completed today, so you just traded away the brain state needed for Today’s Task in favor of some imaginary later benefit.”

What this means is that some form of cognitive inertia (the one that puts a developer “in the zone” when writing code) can be used to the advantage of the development cycle if we structure the project with clear goals and purposes that need minimal interruptions, and let the developer fully focus on the day-to-day progress.

Agile methodologies, when well implemented, help with this, as they let organizations like Scio maintain a high level of cohesion in the development cycle that doesn’t leave much space for distractions. A well-managed team knows its goals and the potential pitfalls and biases that can arise in development, and has the support to focus on the tasks that actually get things done, letting the outcome dictate everything else the product might need.

Cognitive inertia, then, is not inherently a good or bad thing in software development; a well-balanced organization can manage it, and even use it to its advantage. After all, software is not about working harder; it’s about implementing the smartest approach and letting the results speak for themselves.

The Key Takeaways:

  1. Cognitive inertia is not stubbornness; it’s the way some people get used to processing information in their day-to-day.
  2. Changing this inertia can be difficult, but it is not an insurmountable problem, and doing so is a critical need in software development.
  3. Collaboration and tools like coaching can be effective in mitigating the effects of cognitive inertia, feeding constant new information to avoid settling on a single approach.
  4. However, cognitive inertia is not all bad; it helps a developer to focus as long as interruptions and problems derived from sudden changes of course are avoided.
  5. It all comes down to good management. Being aware of biases and cognitive traps, constantly encouraging the exchange of new knowledge between collaborators, coaching, and a good implementation of Agile methodologies can result in a healthy development environment that guarantees a good outcome.

Mythbusting: Are introverts better programmers?


There aren’t many professions without a stereotype attached, and programming is surely among them. But are these ideas about the personality of programmers accurate, or are we missing something else? Let’s look into these old myths and see if they hold up.

By Scio Team

When we think about programming and software, we tend to conjure a specific image in our minds, a stereotype that has accompanied the profession almost since the beginning: the image of a coder hacking away at the keyboard, immersed in a world of their own, without the need of much company. 

However, even if this was true at some point, is it still? The stereotype of the introverted programmer is an even mix of fact and myth, and here at Scio, where we know the talent we work with perfectly well, we want to shed some light on the reality of the people applying a special skill to create software.

Is it possible to profile a personality?

Since the days of the classic “Temperament Theory”, which tried to divide people into 4 distinct types (namely Sanguine, Choleric, Melancholic, and Phlegmatic, which are pretty weird classifications if we are being honest), people have had the impulse to try to understand their personalities, where they come from, and how they affect their everyday lives.

More scientific approaches to these questions have evolved from the 20th century onwards, and today we understand that personality, affinities, and preferences are more fluid and flexible than we once thought, even if we simplify the whole idea for the sake of practicality.

The Myers-Briggs Type Indicator (MBTI) is nowadays one of the most popular systems for tackling this subject, going more in-depth into the inner workings of a person instead of just focusing on their outward behavior.

Going back to the idea of programmers as introverts, tools like the MBTI bring some very interesting insights about this professional field and the people who feel drawn to it. What can we find there?

Let’s define “introversion”

What you need to know right now is that the “introvert/extrovert” dichotomy is a little outdated, simplifying a vast swath of personality types into two neat boxes with little in-between. What the definition of “introversion” tries to convey, under this understanding, is people who don’t have much affinity for certain kinds of social interaction and prefer more individual activities, or the company of a pretty select group of people.

Although many probably feel this way, reducing it to only these signifiers leaves a lot out. What the Myers-Briggs does is check the balance between the following:

  • Extraversion (E) versus introversion (I)
  • Sensing (S) versus intuition (N)
  • Thinking (T) versus feeling (F)
  • Judging (J) versus perceiving (P)

What this system maps out is the preference of the person, rather than the ability, so the metrics here assign percentages based on what a person would prefer to do in a given situation, ending up with a combination of 4 letters based on their highest percentages, like INTP or ESTJ. Please take note of the use of the word “extraversion” instead of “extroversion”, which will be important in a minute.

There are pros and cons to this approach, but the important part here is that we have a lot of historical data to see what large swathes of the population prefer, and in programming, the results are pretty interesting overall, challenging many of our notions about the “introvert coder” stereotype.

So… are programmers introverted or not?

We are getting there. First of all, since we are looking into preferences instead of abilities, it’s important to note that certain groups, as a whole, will pick one instead of the other; it’s a decision (even if a subconscious one) instead of an instinct or impulse. For programmers, this preference goes towards Thinking (T) instead of Feeling (F), meaning that they like to analyze situations from a more objective point of view, not giving as much consideration to the emotional side of things.

Now, this doesn’t mean they only do one of these things. It means that when compelled to act, people will feel more comfortable with a single approach, so if we look at coding, programming, or engineering (where you see lots of interconnected mechanisms balanced between “needs” and “wants”), people who prefer Thinking (T) will be better at it. This post, titled “Does being an introvert make you a better coder?”, puts it nicely:

A typical software developer likes there to be a logical consistency behind a decision. It might not matter much what that consistency is, so long as it’s there. By contrast, other people prefer the ‘feel’ of the situation, using empathy and imagining what it is like more from other people’s point of view. In other words, there is a difference between coders and others, in how they tend to justify a decision.

And as you can see, this has nothing to do with social preferences, or the ability to relate to people in any situation. That’s why this profiling system uses the word “extraversion”, referring to “the world of action, people, and things”, in contrast with “introversion”, or the world of ideas and reflection, both useful for doing complex things such as programming software.

“MBTI introverts prefer fewer, deeper, and more involved interactions with people, whereas extraverts prefer shorter and more frequent interaction. For getting to know users quickly, extraversion can be an advantage, but introverts are perfectly good at deep social interaction”, goes the cited blog. And it’s true; avoiding people has little to do with introversion, and the stereotype comes from misunderstanding what these words try to convey.

An alternative definition of the “introverted programmer”

So, to wrap things up, where does this leave the myth? As we said, maybe at some point in the past, before the development of agile methodologies or the normalization of a remote model of working, the stereotype of the “introverted programmer” was true and functional, but it no longer works that way.

People are more complicated than many of these systems will tell you, and lots of different preferences and abilities are desirable in any well-balanced team. What is true in the age of remote work, though, is that knowing how to interact and communicate well with your coworkers, clients, and managers at a distance is going to be a very valuable skill moving forward, and this has nothing to do with how one approaches the challenges of programming.

So we can leave all that behind and start thinking of the people best adapted to the work of programming in a different way: no longer the introverted programmer, but the thinking one, whose intuition and affinity for code can be greatly complemented by the social understanding and clear communication that only the best Nearshore companies can offer.