Articles


Six months with RamBase: Mitac’s journey from go-live to results

Six months ago, answering a customer’s order inquiry could take Angelina Persson ten minutes or more, switching between Excel files, internal systems, and production schedules to piece together a complete answer. Today, as Chief Operating Officer at Mitac, she has the same information at her fingertips in seconds. The difference? RamBase Cloud ERP.

“Everything we need is now in one place,” says Persson. “Real-time data, complete traceability, instant answers. It’s transformed how we operate.”

For the 40-person operation, the six months since going live have been about discovering what’s possible when information flows freely and everyone works from the same source of truth. This is their honest, unfiltered account of what actually happens in the first six months after flipping the switch.

The reality check: What actually changed?

From guesswork to certainty

Before RamBase, Mitac had no unified business system. Finding information meant hunting through spreadsheets, emails, and local files. The impact touched every department:

Production: “The difference for me is enormous,” says Sanna Virtanen, Production Manager. “Before, I was working with estimates and assumptions. Now I can follow the entire production flow from delivery to delivery and see reality as it actually looks.”

This shift from estimation to reality changes everything. Sanna can now enter all necessary information, control exactly who sees what, and track dependencies across the entire production process. “It’s fantastic,” she says. “Instead of constantly searching for information in multiple places, you now find it readily available.”

The transparency has even changed how Mitac works with customers. “It forces all of us to become more structured,” Sanna adds. “And it forces customers to become more structured too.”

Sales & Purchasing: “I have control over things in a way I never did before,” says Adam Bergqvist. “I can follow the flow and clearly see if something needs to be back-ordered from a supplier. Before, it was ‘yeah, I think we have it’, more guesswork. Now I can see it directly in RamBase instead of logging into several systems or emailing suppliers to ask. We don’t need to think we know anymore. Now we know.”

This certainty eliminates an entire category of problems: promising delivery dates based on assumptions, scrambling when inventory didn’t match expectations, and the downstream effects on customer trust and production schedules.

Time tracking gets real

One of the most tangible improvements came in time registration. Before, operators filled out paper forms without really knowing how long they’d actually spent on each part. Today, it’s measurable with RamBase. “It makes it much easier to follow up and set requirements on how long tasks should take,” says Angelina. “No more guessing, just data.”

This shift enables accurate job costing, reveals previously invisible bottlenecks, and provides the foundation for continuous improvement. When you know how long things actually take, you can make informed decisions about pricing, capacity, and process optimization.

Complete traceability transforms quality

The system provides end-to-end visibility: when something was purchased, when it arrived, what it’s used in, and where it is in production. “We can trace everything now,” explains Angelina. “If there’s a quality issue, we can find the source quickly. And we can see patterns that might indicate problems before they become major incidents. That kind of visibility was impossible before.”

The cross-departmental transparency means fewer surprises. Production can now clearly see when all components have arrived and when they haven’t, eliminating miscommunications that used to be routine. “Those ‘I didn’t know that component wasn’t coming’ situations are increasingly rare,” says Sanna.

The implementation philosophy: Confidence before speed

One of the most striking aspects of Mitac’s journey is their deliberate pacing. “The implementation has been a transparent process,” says Angelina. “We’ve trained departments little by little, giving everyone time to participate, learn, and become comfortable. We moved forward slowly and deliberately—it’s been a safe and good way for everyone in the organization to adopt the new system.”

This phased approach created several advantages:
- Reduced resistance when people aren’t overwhelmed
- Early wins that built momentum and confidence
- Organizational learning that allowed discovery of optimal workflows
- Sustainable change embedded in daily operations

For companies considering similar transitions, Mitac’s approach offers a valuable counterpoint to rushed implementations. Speed matters less than sustainability.

What’s next: Growing into the system

Six months in, Mitac is candid about being early in the journey. They’re working on extracting comprehensive KPIs around specific efficiency gains and error reduction metrics. Those numbers will come. But the qualitative improvements are already clear:
- Decisions happen faster because data is immediately accessible
- Miscommunications have decreased through shared visibility
- Inventory accuracy has improved from guesswork to certainty
- Customer service has strengthened through faster, more accurate responses

Perhaps most importantly, Mitac invested in RamBase early in their growth trajectory, before inefficiency became crisis. “We didn’t wait until things were broken,” reflects Angelina. “This positions us to scale with confidence as demand increases, rather than playing catch-up with infrastructure.”

WMS for E-Commerce in the USA

In many US e-commerce businesses, the warehouse starts as a “small operations problem.” Someone prints pick lists. A few shelves become a few aisles. Inventory lives inside the ERP, and for a while it works. Then growth hits. Orders spike, returns pile up, new SKUs arrive weekly, and the warehouse becomes the heartbeat of customer experience.

That is usually the moment teams realize a hard truth: ERP inventory can be great for accounting, but it is rarely built to run a fast, high-variance warehouse. A WMS is not about adding another system for fun. Done well, it is about protecting margin, improving speed, and reducing errors when e-commerce complexity starts to outgrow ERP logic.

When ERP stops being enough in an e-commerce warehouse

ERP inventory typically answers questions finance cares about: what do we have, what is it worth, what was received, what was shipped. A warehouse needs to answer different questions in real time: where is it, who should pick it, what is the fastest path, what do we do with returns, what is the priority right now.

You are likely outgrowing ERP warehouse functionality if you recognize these patterns:
- You ship late even when you have inventory.
- Picks are correct “most of the time,” but returns and reships are rising.
- Your team relies on tribal knowledge and the best person in the building.
- New hires take too long to become productive.
- Inventory accuracy looks fine on paper, but customer support keeps hearing “it said in stock.”

The painful part is that these problems compound. A small increase in errors can create a large increase in labor, because every mistake creates extra touches: searching, re-picking, re-packing, re-labeling, responding to customers, and reconciling financial impacts. At that point, the warehouse is not just fulfilling orders. It is bleeding time.

What a WMS actually changes in daily operations

A good WMS is not primarily a dashboard. It is execution logic for the floor. It assigns work, sequences tasks, and reduces decision making at the shelf level. In e-commerce, the most meaningful WMS capabilities are simple in concept but powerful in outcome:

First, it gives you location-level control. Not “we have 200 units,” but “we have 12 units in bin A-03, 8 in B-11, and 5 in returns quarantine.”

Second, it supports directed putaway and replenishment. That means your best locations stay stocked without constant human guesswork.

Third, it enables optimized picking. Batch picking, wave picking, zone picking, pick path guidance, and real-time task assignment can reduce travel time and cut errors.

Fourth, it improves packing and shipping accuracy. Weight checks, scan validation, and integration with shipping tools reduce the “wrong item, wrong label” spiral.

Fifth, it makes returns operational, not emotional. Returns are where many e-commerce warehouses lose control because every return is an exception. A WMS can standardize intake, inspection, disposition, and restock logic.

This is the key shift: the warehouse stops being a place where people “figure it out” and becomes a place where processes run.

Integration in the US e-commerce stack: what must connect

A WMS does not replace your ERP. It complements it. But value depends on clean data flow.
In most US e-commerce environments, the WMS needs stable connections to:
- order sources (Shopify, BigCommerce, marketplaces, OMS)
- ERP for financial posting and item master ownership
- shipping systems (rate shopping, labels, carrier compliance)
- returns workflow tools (if you use one)
- BI layer for cross-functional reporting

The practical advice here is simple: decide where the system of record is for each data set. Item master, customer master, inventory, and order status should not be “owned by everyone.” Split ownership clearly, then design integrations around that reality.

The KPIs that prove a WMS is worth it

A WMS project becomes messy when you cannot prove improvement. “It feels better” is not a KPI. The best approach is to measure a baseline before go-live and compare against the same definitions after. Here are the KPIs that matter most for US e-commerce operations. You do not need all of them. Pick a focused set.

Order cycle time – Measure from order release to shipment confirmation. This shows operational speed, not just staffing.

On-time ship rate – The percentage of orders shipped within your promised window. This connects warehouse execution to customer satisfaction and marketplace metrics.

Pick accuracy – Track errors per 1,000 order lines or per 1,000 units. Accuracy should improve even during peak periods, not only in calm weeks.

Units per labor hour – A clean productivity metric. Useful for capacity planning and peak season staffing.

Inventory accuracy – Compare system inventory to cycle counts. Include location accuracy, not only total quantity.

Out-of-stocks caused by miscounts – This is the painful one. It measures how often your inventory system “lies” and triggers lost sales or cancellations.

Returns processing time – From returns receipt to final disposition. In e-commerce, returns are a workflow, not a side task.

Touches per order – How many times a human touches an order line. A WMS should reduce touches, not add them.

A strong WMS implementation usually improves accuracy and speed first. Then it improves labor efficiency. That sequence matters, because efficiency without accuracy simply scales mistakes.

The most common WMS mistake in e-commerce

The biggest mistake is buying a WMS based on feature lists and forgetting the warehouse reality. A system that looks perfect in a demo can fail in a building with your SKU profile, your packaging constraints, your seasonality, and your labor model.

The second mistake is over-customizing in phase one. Many warehouses need consistency more than creativity. Start with a stable core: receiving, putaway, picking, packing, shipping, returns. Then optimize.

And one more issue that US teams often underestimate: change management. A WMS changes how people work. If supervisors do not buy into it, floor adoption will lag. The system will be blamed for human resistance.

When a WMS is not the answer

Sometimes the right move is not a full WMS. If your volume is low, your operation is simple, and your primary pain is sales forecasting or procurement planning, a WMS may be overkill.

But if your customer experience is defined by fast shipping, accurate orders, and smooth returns, the warehouse is your competitive edge. In that case, a WMS is not “extra.” It is infrastructure for growth.

Closing thought

In US e-commerce, the warehouse is where margin is won or lost. When ERP inventory stops being enough, a WMS becomes the system that turns chaos into process.
If you want a quick self-check, ask two questions: Can we trust our available-to-promise inventory in real time? Can a new hire pick accurately within a week? If the honest answer is no, you are not just missing software. You are missing warehouse execution logic.
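To make the KPI definitions above concrete, here is a minimal Python sketch of how a team might compute a few of them from raw order and pick records. It is an illustration only: the record shapes, field names, and sample values are assumptions, not the export format of any particular WMS or ERP.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record shapes; real systems will expose richer data.
@dataclass
class OrderEvent:
    order_id: str
    released_at: datetime
    shipped_at: datetime
    promised_by: datetime

@dataclass
class PickLine:
    order_id: str
    picked_correctly: bool

def order_cycle_hours(events: list[OrderEvent]) -> float:
    """Average hours from order release to shipment confirmation."""
    hours = [(e.shipped_at - e.released_at).total_seconds() / 3600 for e in events]
    return sum(hours) / len(hours)

def on_time_ship_rate(events: list[OrderEvent]) -> float:
    """Share of orders shipped within the promised window."""
    return sum(e.shipped_at <= e.promised_by for e in events) / len(events)

def pick_errors_per_1000_lines(lines: list[PickLine]) -> float:
    """Pick errors per 1,000 order lines."""
    errors = sum(not line.picked_correctly for line in lines)
    return errors / len(lines) * 1000

# Example usage with two illustrative orders.
events = [
    OrderEvent("SO-1001", datetime(2026, 1, 5, 9), datetime(2026, 1, 5, 15), datetime(2026, 1, 6, 12)),
    OrderEvent("SO-1002", datetime(2026, 1, 5, 10), datetime(2026, 1, 7, 11), datetime(2026, 1, 6, 12)),
]
lines = [PickLine("SO-1001", True), PickLine("SO-1001", True), PickLine("SO-1002", False)]

print(f"Order cycle time (h): {order_cycle_hours(events):.1f}")
print(f"On-time ship rate: {on_time_ship_rate(events):.0%}")
print(f"Pick errors / 1,000 lines: {pick_errors_per_1000_lines(lines):.0f}")
```

The value is not in the code itself but in the discipline it implies: the same definitions measured before go-live and after, so improvement can be proven rather than felt.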

How to Choose an ERP System in the USA in 2026

There is a moment in almost every ERP project when someone says, “We just need a system that can do everything.” It sounds reasonable. It is also how companies end up with an expensive platform that fits no one particularly well.

ERP selection in the USA in 2026 is not about finding the most famous brand or the longest feature list. It is about making a decision that will shape how your business behaves every day. Your month-end close. Your order-to-cash cycle. Your inventory accuracy. Your ability to answer a simple executive question without three spreadsheets and a prayer. And if you are honest, that is what an ERP is really about: turning daily operations into repeatable outcomes.

The problem with the way most teams buy ERP

Most ERP buying processes start in the wrong place. They start with software. Demos. Modules. Licenses. A timeline that looks clean because it has to look clean. Then reality shows up: messy master data, unclear process ownership, integration assumptions, and the biggest surprise of all, the realization that people do not change just because a new system arrived.

If your ERP project fails, it rarely fails because the system did not have a specific function. It fails because the business was not ready to run in a new way. Or because the vendor relationship quietly shifted from partnership to dependency. This is why “ERP selection” is a misleading phrase. You are not selecting software. You are selecting the operating logic that will run your company.

2026 changed the conversation in the US

The US market is more disciplined now. CFOs demand predictable cost. Operations leaders demand measurable impact. IT leaders demand clarity on security and integration. And everyone, whether they say it out loud or not, wants the same thing: freedom to adapt later.

That last point matters more than most teams admit. Because ERP is a long decision. It sits under your most important processes. Once you build around it, switching is painful. So the real question becomes less romantic: do you have a plan for change, or are you betting your business on permanence? In 2026, the smartest teams are not just asking “Will this ERP work?” They are asking “What happens if we need to leave?”

The ERP questions that actually predict success

Here is what an expert buyer does differently. They ask questions that expose the project reality, not the sales narrative.

First, they ask how the vendor defines a successful first 90 days. Not in slogans, but in concrete deliverables. If a vendor cannot describe early value, you are probably buying a long, expensive promise.

Second, they force clarity on scope. Every ERP implementation is a tradeoff between speed and perfection. A mature vendor will draw a line around phase one and defend it. An immature vendor will say yes to everything and charge you later in change requests.

Third, they talk about data early. ERP is a mirror. It reflects the quality of your item master, customer master, vendor master, chart of accounts, and process rules. If your data is inconsistent, the system will not fix it. It will scale the inconsistency.

Fourth, they ask who owns process decisions. This is a silent killer. If the business thinks IT owns the project, and IT thinks the business owns the process, nothing gets decided. And when nothing gets decided, the default outcome is always the same: you rebuild the old process inside the new tool and call it transformation.

Finally, they ask about integration assumptions and total cost, not just license cost.
In the US mid-market, the ERP rarely lives alone. There is payroll, banking, tax, e-commerce, EDI, WMS, BI, CRM, shipping, and sometimes a patchwork of legacy tools that never fully went away. Every integration is a maintenance relationship. You should buy that relationship with open eyes.

The contract is where the future gets locked in

If you want to know where ERP risk hides, look at the contract. Not just the price. Bad contracts do two things. They keep scope vague, and they keep responsibilities blurry. That combination is the perfect recipe for budget creep. When scope is unclear, everything becomes a negotiation. When responsibilities are unclear, every problem becomes “not included.”

There is another category of risk that matters more in 2026 than it did in the past: exit readiness. Many teams assume that if they own their data, leaving will be easy. That assumption is dangerous. In practice, the pain is not “getting a file.” The pain is getting the data in a form that preserves relationships, history, and logic so another system can actually use it. If a vendor cannot explain, in plain terms, how you export your data, how long it takes, and what it costs, you do not have an exit plan. You have wishful thinking.

Buy an ERP like you buy risk management

The most professional way to evaluate ERP vendors is to stop treating the decision as a beauty contest and start treating it as risk management. Look at fit to your core processes first. Not every process, only the ones that determine your margins and your customer experience. Then judge the implementation approach. Does the vendor bring a credible path from today to go-live, or do they hide behind “best practices” without making hard decisions?

Then assess the boring but decisive elements: data readiness, integration clarity, support model, upgrade path, and the vendor’s willingness to talk about failure modes. Strong vendors can explain why projects fail and what they do to prevent that. Weak vendors pretend failure is someone else’s problem.

A closing note for leaders

The most expensive ERP mistake is not choosing the wrong software. It is choosing without clarity on outcomes, ownership, and exit. If you want one executive test before you sign: ask the vendor to describe, in plain language, what your company will be able to do better in 90 days, and what happens if you decide to leave in three years. If they cannot answer both without dodging, keep looking.

Because in 2026, the best ERP is not the one that can do everything. It is the one that helps your business do the right things consistently, and keeps you free to change when the business demands it.

Cloud vs On-Prem: The Real Question Is Exit Cost, Not Ideology

For years, the “cloud vs on-prem” debate has sounded like a team sport. One side talks about speed and scale. The other talks about control and predictability. In real companies, though, the decision is rarely philosophical. It is operational.

The most useful question is not “Should we move to the cloud?” It is “Which workloads belong where, and what is our plan if we need to change direction?” Because the painful part is usually not the move in. It is the move out.

Why companies are revisiting cloud decisions

A few years ago, many cloud migrations were driven by urgency and optimism. Today the conversation is more sober, and for good reasons:
- Cost pressure is sharper. CFOs now scrutinize cloud bills the way they scrutinize payroll.
- The market matured. The “wow factor” wore off, and outcomes matter more than narratives.
- AI raised the stakes. Data is no longer just stored. It is processed, enriched, and reused, which makes control and governance harder.

None of this means cloud is “bad.” It means the bar is higher. You need a plan, not a trend.

The three cloud models and the three types of lock-in

A big reason cloud decisions get messy is that we say “cloud” as if it were one thing. In practice, there are three models, each with a different kind of dependency.

SaaS – Fastest time to value. Also the highest risk that your processes and data become tightly coupled to one vendor’s way of working.

PaaS – Great developer experience, great managed services, and often the strongest platform lock-in. The more you lean into proprietary services, the harder it is to lift and shift later.

IaaS – Most flexible and usually the most portable. But it requires discipline: architecture, operations, security hygiene, cost governance.

A simple rule: there is no universally best model. There is only a best fit for a specific workload and a specific risk tolerance.

Where TCO surprises show up in organizations

Most teams budget for the obvious: licenses, usage, and some migration effort. The surprises tend to come from everything that sits around the workload.

Common TCO blind spots:
- Data movement and egress. Moving data out can become a real budget line, not a footnote.
- Observability and security tooling. Logging, monitoring, SIEM, and vulnerability scanning add up quickly.
- Backups and retention. Especially when compliance and long retention periods enter the picture.
- Integrations. The cost is not only building them, but maintaining them as systems evolve.
- People time. Cloud success requires skills, and skills require time, training, and focus.

If you want one sentence for executives: TCO is not the subscription. TCO is the subscription plus the operational reality.

Security: the shared responsibility gap

In the US market, “cloud is secure” is both true and misleading. The infrastructure may be secure, but many incidents come from configuration, access, and process. Security is not a location. It is a practice.

The practical issues that cause pain are consistent:
- Who owns identity and access management end to end
- How secrets are stored and rotated
- Whether backups are tested and restorable, not just “enabled”
- How segmentation is enforced between environments and workloads
- Whether incident response is rehearsed, not just documented

A strong cloud provider can give you solid foundations. It cannot replace your governance.

Cloud repatriation: the moment the bill comes due

In the US, more leaders are talking about cloud repatriation, moving some workloads back to private infrastructure, colocation, or hybrid setups.
The reason is rarely “cloud failed.” The reason is usually one of these:
- Predictability. Costs and performance become easier to forecast.
- Control. Data, latency, and operational autonomy matter more at scale.
- Exit pressure. Mergers, vendor changes, compliance, or strategy shifts demand flexibility.

The hardest part is often data portability. Many companies discover too late that “export” can mean “a pile of flat files” instead of a usable, relational dataset with history, relationships, and context. This is where vendor lock-in becomes real. Not as a buzzword, but as weeks of mapping, rebuilding, validating, and explaining discrepancies to the business. If your exit plan is “we will figure it out later,” you do not have a plan. You have a risk.

A simple decision framework that avoids the religious war

Instead of picking a side, evaluate each workload through a short lens:

Business criticality – If downtime stops revenue, you need stronger control and stronger recovery plans.

Data sensitivity and compliance – Not every dataset carries the same regulatory and reputational weight.

Performance and latency needs – Some workloads tolerate distance. Others do not.

Team capability – A model that looks perfect on paper fails fast without the right skills.

Lock-in tolerance – SaaS can be great, but only if the exit path is clear and contractually protected.

Exit cost and time – Ask for numbers. Ask for timelines. Ask for formats. Ask for responsibilities.

Many US enterprises land on hybrid not because it is trendy, but because it is a balanced risk portfolio.

The questions to ask before you sign anything

If you want to protect sovereignty and optionality, ask these questions early:
- How do we retrieve our data in a usable form, including relationships, history, and attachments?
- What is the timeframe and SLA for data export after termination?
- What are the egress costs, and how do they scale with volume?
- Can we run the system in our own environment, or is it permanently tied to the vendor?
- What does disaster recovery look like, and how often is it tested?
- What happens if pricing changes or a product feature is deprecated?

These are not “difficult questions.” They are executive questions. They separate a smart purchase from a long dependency.
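To make the “subscription plus operational reality” point concrete, here is a minimal sketch of a per-workload TCO model. The cost categories mirror the blind spots listed above; every number is a placeholder for illustration, not a benchmark.

```python
# Minimal annual TCO sketch for a single workload.
# Every figure below is a placeholder to show the structure, not a benchmark.

def annual_tco(costs: dict[str, float]) -> float:
    """Sum all yearly cost components for one workload."""
    return sum(costs.values())

workload_costs = {
    "subscription_or_licenses": 120_000,   # the line item everyone budgets for
    "data_egress": 18_000,                 # moving data out, per year
    "observability_and_security": 25_000,  # logging, monitoring, SIEM, scanning
    "backups_and_retention": 12_000,       # especially with long retention periods
    "integration_maintenance": 30_000,     # keeping connections alive as systems evolve
    "people_time": 60_000,                 # skills, training, and operations focus
}

total = annual_tco(workload_costs)
subscription_share = workload_costs["subscription_or_licenses"] / total

print(f"Total annual TCO: ${total:,.0f}")
print(f"Subscription as share of TCO: {subscription_share:.0%}")
```

Even with placeholder figures, the structure makes the executive point: the subscription line is often less than half of what the workload really costs to run.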

Reaching for the Sky and Beyond: The New Digital Mandate for Commercial Aerospace in 2026 

Faced with rising cybersecurity threats and persistent supply chain shortages, airlines and regulators alike are demanding a new level of digital resilience. In 2026, success will hinge on closing critical security gaps and on more outside-the-box digital thinking on supply chains to mitigate parts issues, including a promising outlook for 3D printing of on-demand parts. A tightly stretched human workforce will get some relief as Agentic Industrial AI becomes a digital co-pilot in maintenance hangars. Meanwhile, reusable rockets are opening the door for a new space aftermarket, expanding the very concept of MRO. The path to new markets means the sky isn’t the limit anymore!

Cybersecurity grows in importance to defend globally critical infrastructure by closing the gap in the middle

The entire commercial aviation network is critical, high-value infrastructure that ensures the effective movement of people and goods around the world – think transportation of vaccines. The industry’s vulnerability to cyberattacks and their ability to cause widespread disruption has been underscored by recent examples. Thales figures found a 600% increase in ransomware attacks in the aviation sector between 2024 and 2025. Just look to the ransomware attack in September 2025 that crippled check-in systems across multiple major European hubs such as Brussels, London, and Berlin as a perfect example of how a single vendor compromise can cascade into continental-scale disruption.

Any cybersecurity incident that impacts commercial aviation not only exposes personal data and damages passenger trust, it can also cripple the global supply chain. At issue is the fact that aviation is still only partially digitally mature. Older mainframe systems are often seen as impervious to cyberattack, which is only partially true, while the best modern systems are built for security. The true vulnerability lies in the “middle section,” where airline, aircraft, and ground systems have been partially modernized but are not fully up to date with modern cybersecurity practices.

In the year ahead, airlines and regulatory bodies, motivated by recent attacks, the essential role of aviation, and the consequent potential targeting by state-sanctioned actors, will mandate a significant push for digital modernization across the entire industry. This will compel all major airlines and airports to implement up-to-date, modern cybersecurity practices for all operational systems, closing the “middle section” gap to counter potential threats. This is where airline operators need seamless agility and resilience to stand a chance in the cybersecurity battle. Any software provider to airlines and MROs must adopt a clear security posture, constantly addressing vulnerabilities with frequent updates using an evergreen approach and, ideally, designing out vulnerabilities from the beginning.

3D printing becomes part of the answer to global supply chain constraints

Supply chain challenges for spare parts availability persist in commercial aviation, driving the issue back up to the top of the list facing the aviation maintenance industry and leading airlines and air operators to think outside the box and adopt innovative strategies to maintain operational readiness. One potential solution has been to use Parts Manufacturer Approval (PMA) parts, but some airlines face considerable hurdles here, as lessors often refuse to allow PMA parts on their aircraft.
Even if used as a stop-gap, airlines are forced to swap them out at the time of lease return, meaning they are still subject to the main suppliers’ limitations. However, other parts supply solutions are on the horizon. There are promising signs from ongoing efforts by FAA and EASA regulators to clarify how 3D-printed parts can be used in certain applications.

Additive manufacturing, combined with the digital thread, could help solve supply chain bottlenecks by allowing parts to be produced quickly and in proximity to where they are needed. In particular, this technology offers a solution for maintaining older aircraft more efficiently, as digital files for specific parts replace the need to store molds and retool assembly lines that may have been decommissioned years before.

Following a formal loosening of regulatory constraints, 3D-printed parts will become a mainstream, more accepted solution. The ability to rapidly produce both non-critical and older aircraft components will drastically streamline MRO processes and establish 3D printing as a driver of supply chain resilience in an industry that continues to feel the pain of supply chain issues. We are already seeing this shift with certified 3D-printed engine components and heat exchangers that handle super-complex geometries not achievable through traditional manufacturing, such as those on the GE Catalyst turboprop engine and the 3D-printed air-to-air heat exchanger flying on the Cessna Denali.

Industrial AI and digital co-pilots in maintenance hangars will revolutionize maintenance troubleshooting

If there is one thing that rivals supply chain challenges for the top of the issues list in aviation maintenance, it is the skilled workforce shortage. And it’s abundantly clear that technician shortages will not be solved in the next 12 months. Despite technician certifications rising, The Pipeline Report from the U.S. Aviation Technician Education Council (ATEC) and Oliver Wyman shows increasing demand and projected retirements are expected to leave commercial aviation with 10% fewer certified mechanics than needed in 2025. So, the question becomes: how can we help the technicians we do have do more?

One answer is to digitally augment maintenance technicians to improve overall efficiency. This is where applications of Agentic AI are stepping up to the plate. One of the most impactful applications of this AI will be the creation of a “troubleshooting agent” to support maintenance technicians. This generative AI co-pilot will be able to navigate the extraordinary complexity of maintenance documentation, such as Airworthiness Directives (ADs) and Service Bulletins (SBs). The ideal agent will be able to help navigate complex reference documentation like AMMs, CMMs, troubleshooting manuals, or the IPC while pulling up pertinent SBs or ADs. The co-pilot could flag a potential recurring fault and surface which repairs failed to work previously. In another scenario, it could suggest the likely candidates for troubleshooting tasks, including historic success rates and time to execute. It could even request the required parts automatically, so they are there waiting.

In the year ahead, expect troubleshooting agents to move out of the pilot phase and into deployment within the maintenance operations of airlines and MROs. These agents will serve as a digital co-pilot that enhances the productivity of the existing, experienced workforce, while also helping close the knowledge gap for newer technicians.
The dawn of the space aftermarket!

Looking further skyward, an aftermarket opportunity is emerging that goes beyond Earth’s stratosphere. The new aftermarket is being driven by a proliferation of satellites deployed for communication, observation, and scientific purposes, combined with the rise of reusable vertical-landing rockets such as the SpaceX Falcon 9 and the newly developing Starship. Commercial space tourism is now adding a third catalyst, with reusable spaceflight vehicles that must be maintained to rigorous safety and compliance standards between flights. Because these launch vehicles are increasingly designed for reusability, they now require a formal sustainment process rather than simple disposal after a single use. Together, these shifts are creating an entirely new MRO market for the launch platforms themselves.

For the most part, orbital vehicles have been treated as disposable assets with a finite operational life. Bringing spacecraft back down to Earth has not been feasible, and sending repair systems up has been equally impractical. The advent of self-healing materials is beginning to shift this paradigm by enabling spacecraft to autonomously repair micro-cracks and structural degradation in orbit, as demonstrated in recent aerospace research on self-healing composites. At the same time, dramatically lower launch costs mean that on-orbit servicing and repair are becoming feasible for the first time.

Launch and space-platform MRO is rapidly emerging as the next frontier. Blue Origin’s multi-use Blue Ring platform illustrates how reusable vehicles will create entirely new sustainment markets. In parallel, NASA’s In-space Servicing, Assembly and Manufacturing (ISAM) framework highlights how satellites and launch systems will require formal sustainment infrastructures rather than being treated as disposable. Research shows the space logistics market will grow to $19.8 billion by 2040, with growth driven by on-orbit servicing, assembly, and manufacturing, as well as last-mile logistics.

The ripple effect over the coming years is that these once disposable space assets will require sustainment and support strategies to maximize availability, improve efficiency, and further reduce the costs of space operations. This means maintenance needs to be built into the asset management lifecycle. No matter the form the servicing takes, this shift means new systems will need to be implemented to manage the ongoing lifecycle of these assets in ways not previously required. Manufacturers must make sure vehicles are ready not just for use but for re-use and, critically, are 100% operational when they are required.

The modernization mandate to chart a course for commercial aviation success

The outlook for commercial aviation in 2026 is clear: digital resilience isn’t a buzzword, it’s a key path forward. Facing critical cybersecurity threats and persistent supply chain bottlenecks, the industry is accelerating its digitalization out of necessity, not choice. Securing a vulnerable digital infrastructure is essential for out-of-the-box approaches to parts shortages such as 3D printing to reach their full potential, giving airlines the agile power to create parts on demand. Hand in hand – the journey goes up, up and away! An even more exciting shift is human-tech collaboration taking flight.
With technician shortages here to stay, Agentic AI will emerge as a digital co-pilot, boosting efficiency in maintenance hangars by instantly mastering complex technical data. Looking even higher, the growth in commercial space continues to open new opportunities for aerospace companies.

Author: Rob Mather, Vice President, Aerospace and Defense Industries

As Vice President, Aerospace and Defense Industries, Rob Mather is responsible for leading the charge on IFS’ global A&D industry marketing strategy, while also supporting product development, sales and partner ecosystem growth. Rob has over 15 years’ experience in the A&D sector, starting out in the field and having held a number of strategic R&D, Presales and Consulting positions at IFS, Mxi Technologies and Fugro Aviation. Prior to his current position, Rob was instrumental in building and leading the global A&D Presales Solution Architecture team at IFS, playing a key role in a number of customer success engagements at some of the top names in commercial aviation and defense. He holds a degree in Aerospace Engineering from Carleton University in Ottawa, Canada, where he currently resides.

Orchestration and automation of data migration in D365 F&O – The key to a smooth transformation

Data migration is always a moment of truth in any ERP implementation. The quality of migration determines not only the accuracy of information in the new system but, above all, a smooth go-live. In projects overloaded with manual tasks, the risk of errors and delays increases with every migration cycle. That’s why companies are increasingly focusing on orchestration and automation – a strategy that streamlines the process and elevates it to a much higher level of efficiency.

At 7F Technology Partners, we see how clients who have organized and automated their migration path to D365 F&O gain predictability and full control over their data. This is an investment that pays off not only during the initial launch but also in subsequent iterations and system stabilization.

Order in the process

Orchestrating data migration in D365 F&O means centrally managing the entire sequence of activities – from extracting information from the legacy system, through transformation, to import. Maintaining the correct sequence is critical, especially for dependent entities. A typical example is importing customers only after countries and addresses have been loaded. Dynamics 365 provides native tools for this: Data Management Framework, Data Projects, and Data Jobs. These tools not only help organize the process but also enable building a predictable and consistent migration path that can be reliably reproduced across different environments.

Everyday automation

The next step is automation, which frees project teams from repetitive tasks. By leveraging Azure DevOps, Power Automate, or PowerShell scripts, it’s possible to run complete migration cycles in the background, on a schedule – even overnight. After each run, teams receive reports and alerts about any errors, allowing them to focus on analyzing results rather than manually triggering imports. This approach accelerates every subsequent project sprint and increases the overall stability of the process.

SharePoint as a flexible data source

More and more organizations are using shared repositories like SharePoint to store migration files. Thanks to integration with Power Automate, data can be retrieved and imported into D365 F&O without manual file transfers. This gives companies full version control, easier management of updates, and a transparent data approval process. In practice, this leads to greater consistency between teams and better control over migration progress – especially in distributed environments.

Business impact

Orchestration and automation of migration translate into tangible business benefits. ERP implementation time is shortened because the process can be repeated more frequently and quickly. The number of errors from manual work decreases. Every stage is transparent and documented, making audits easier and reducing the risk of bottlenecks. Across the entire project, this approach means better data quality, greater stability, and less strain on key teams.

The standard for modern implementations

For organizations investing in digital transformation, orchestrating and automating migration to D365 F&O is becoming the natural standard. It enables implementations to be repeatable, predictable, and aligned with best practices. At 7F Technology Partners, we work with clients in exactly this way – combining orchestration, automation, and integration with SharePoint and Power Automate. As a result, the migration process becomes not only faster but, above all, more stable and better controlled.
This is the foundation on which further digitalization of operations can be safely built.
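To make the overnight migration cycle described above more tangible, here is a minimal Python sketch of an unattended package import against D365 F&O, assuming the standard data management package API actions (GetAzureWriteUrl, ImportFromPackage, GetExecutionSummaryStatus). The tenant, app registration, environment URL, data project name, and package file are placeholders, and error handling is kept to the bare minimum.

```python
import json
import time
import requests

# Placeholders – replace with your tenant, app registration, and environment details.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "<secret from your key vault>"
FNO_URL = "https://contoso-test.sandbox.operations.dynamics.com"
DATA_PROJECT = "CustomersImport"   # name of an existing import data project
LEGAL_ENTITY = "USMF"
PACKAGE_FILE = "customers_package.zip"

def get_token() -> str:
    """Acquire an app-only token for the F&O environment (client credentials flow)."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": f"{FNO_URL}/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def dmf_action(name: str, payload: dict, headers: dict) -> dict:
    """Call a DataManagementDefinitionGroups OData action by name."""
    url = f"{FNO_URL}/data/DataManagementDefinitionGroups/Microsoft.Dynamics.DataEntities.{name}"
    resp = requests.post(url, json=payload, headers=headers)
    resp.raise_for_status()
    return resp.json()

headers = {"Authorization": f"Bearer {get_token()}"}

# 1. Ask F&O for a writable blob URL and upload the data package to it.
write_url = dmf_action("GetAzureWriteUrl", {"uniqueFileName": PACKAGE_FILE}, headers)
blob_url = json.loads(write_url["value"])["BlobUrl"]
with open(PACKAGE_FILE, "rb") as f:
    requests.put(blob_url, data=f, headers={"x-ms-blob-type": "BlockBlob"}).raise_for_status()

# 2. Trigger the import into the existing data project.
execution = dmf_action(
    "ImportFromPackage",
    {
        "packageUrl": blob_url,
        "definitionGroupId": DATA_PROJECT,
        "executionId": "",
        "execute": True,
        "overwrite": True,
        "legalEntityId": LEGAL_ENTITY,
    },
    headers,
)
execution_id = execution["value"]

# 3. Poll until the run finishes, then report the outcome.
status = "Executing"
while status in ("NotRun", "Executing"):
    time.sleep(30)
    status = dmf_action("GetExecutionSummaryStatus", {"executionId": execution_id}, headers)["value"]
print(f"Data project '{DATA_PROJECT}' finished with status: {status}")
```

In practice, a script like this would typically run from an Azure DevOps pipeline or a scheduled job, with the final status feeding the run reports and alerts described above.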

Tools and best project practices in the context of Dynamics 365 Finance & Operations implementations

Implementing an ERP system is always a significant challenge. In Dynamics 365 Finance & Operations projects, success depends not only on technology, but also on methodology, teamwork, and the conscious use of project tools. These tools help organize activities, accelerate key phases, and minimize risks that naturally arise during complex transformations. From the perspective of 7F Technology Partners, we see that a well-planned project helps avoid unnecessary delays, configuration errors, and adoption issues. It is therefore worth ensuring a solid foundation before the implementation enters its operational phase.

LCS as the project hub

Lifecycle Services (LCS) plays a crucial role in D365 F&O projects. It is a platform that supports the entire implementation lifecycle: from environment management and code migration planning to performance analysis and monitoring compliance with Microsoft best practices. As a result, the team’s work proceeds in an organized and predictable manner.

BPM and the quality of analysis

Business Process Modeler (BPM) helps visualize business processes and link them to D365 F&O functionalities. During the analysis and testing phases, this tool is particularly important, as it ensures that real processes are mapped – not just theoretical assumptions. A well-prepared process map makes subsequent testing easier and shortens decision-making time.

DevOps, DMF, and RSAT in daily operations

In the operational part of the project, Azure DevOps comes to the forefront, streamlining task management, backlog, and version control. Simultaneously, the Data Management Framework (DMF) (we wrote about it here) is responsible for data migration and synchronization between environments, while RSAT automates regression testing, reducing the cost of maintaining solution quality. This toolkit increases project control and improves system stability.

Automating data migration and transformation

Power Automate accelerates data loading processes and supports synchronization between D365 F&O and other systems. Meanwhile, Power Query and VBA macros streamline data transformation before import, making file preparation faster and more repeatable. In projects where data is one of the biggest challenges, these tools significantly ease the team’s workload.

Best practices that make a difference

Effective ERP implementation is largely a matter of organization and communication. Here are a few principles that work well in D365 F&O projects:
- Define a clear structure of roles and responsibilities – both on the client and implementation partner sides.
- Favor an iterative work model – short implementation cycles (agile/hybrid agile) allow for faster verification of results and quicker response to changes.
- Document and validate business processes – use BPM and test scenarios to ensure the system reflects the organization’s real needs.
- Manage data and environment versions – an organized approach to migration and testing minimizes the risk of data loss.
- Don’t skip training and UAT phases – involving end users in testing and acceptance is key to a successful implementation.

The project as an investment

Implementing D365 F&O is not just about configuring a new system. It is a broad organizational transformation whose effectiveness depends on the conscious use of tools and consistent adherence to best practices. A well-managed project provides the company with transparency, risk control, and a foundation for further digitalization. Approached in this way, it becomes an investment, not a cost.
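As a complement to the “Automating data migration and transformation” section above, here is a minimal pandas-based sketch of the kind of pre-import cleanup the article attributes to Power Query and VBA macros. The column names, sample rows, and validation rules are illustrative assumptions, not a standard entity format.

```python
import pandas as pd

# Illustrative source extract; in a real project this would come from the legacy system.
raw = pd.DataFrame({
    "CUSTOMERACCOUNT": [" C-001", "c-002", "C-003", None],
    "NAME": ["Contoso Ltd", "Fabrikam  Inc", None, "Adventure Works"],
    "SALESCURRENCYCODE": ["usd", "USD", "EUR", "usd"],
})

# Typical pre-import rules: trim and upper-case keys, normalize codes,
# and split off rows that would fail validation in the data project.
clean = raw.copy()
clean["CUSTOMERACCOUNT"] = clean["CUSTOMERACCOUNT"].str.strip().str.upper()
clean["NAME"] = clean["NAME"].str.replace(r"\s+", " ", regex=True).str.strip()
clean["SALESCURRENCYCODE"] = clean["SALESCURRENCYCODE"].str.upper()

required = ["CUSTOMERACCOUNT", "NAME"]
invalid = clean[clean[required].isna().any(axis=1)]
valid = clean.dropna(subset=required)

# Valid rows go to the import file; rejected rows go back to the data owners.
valid.to_csv("customers_import.csv", index=False)
invalid.to_csv("customers_rejected.csv", index=False)
print(f"{len(valid)} rows ready for import, {len(invalid)} rows sent back for correction")
```

The same pattern scales to real extracts: normalize keys, enforce required fields, and hand rejected rows back to data owners before they ever reach a data project.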

Data Management Framework in practice – How to manage data effectively in D365 F&O

In the world of modern ERP systems, data forms the foundation of business decision-making. The quality and consistency of data determine whether financial, logistics, or sales processes will run smoothly and predictably. In Dynamics 365 Finance & Operations, the central tool supporting this area is the Data Management Framework (DMF). From the perspective of 7F Technology Partners, DMF is one of those elements that truly demonstrates how working with data can streamline the daily operations of an organization. However, to fully leverage its potential, it’s important to understand how it works on an operational level and which habits most effectively boost its performance.

Entities as a common data language

DMF is based on Data Entities, which represent specific business objects such as customers, vendors, products, or orders. This means users don’t need to know the database structure – they work with concepts familiar from business processes. The system provides hundreds of ready-to-use entities, but also allows you to create your own, offering great flexibility and enabling you to tailor the tool to your company’s needs. In practice, this means that even complex data models can be managed in a unified and repeatable way.

Data Projects in everyday use

At the heart of working with DMF are Data Projects, which define the process of importing or exporting data. Within a project, you specify which entity you’re using, the source file, and the format – Excel, CSV, XML, or ZIP. The import process follows a consistent pattern: select the entity, map columns, validate, process, and analyze results. Export works similarly, except you indicate which data to extract and the destination, such as Azure Blob Storage, SharePoint, or a downloadable file. This mechanism enables the creation of repeatable, predictable processes and relieves teams that rely on up-to-date operational data.

Good data preparation

Most challenges during import stem from the quality of input data. Incorrect formats, missing values, or business inconsistencies can block the entire process. That’s why a key aspect of working with DMF is proper file preparation – with the right structures, units, and required fields. In practice, this means not only improving files but also building awareness among the teams responsible for data that its quality directly impacts the success of operations in the system.

Templates that organize work

Data Templates are a tool that significantly streamlines migrations and repeat imports. Templates group entities into logical sets – for example, for finance or warehouse areas – making it easier to maintain consistency between project stages and across environments. It’s a simple way to standardize data work, especially appreciated by teams involved in rollouts or system maintenance.

Control and error handling

DMF provides mechanisms for monitoring task statuses and logs that allow quick identification of errors. Information on the number of processed records, execution time, and reasons for failures gives a transparent view of what happened during import or export. In practice, this tool not only helps detect errors but also improves the entire process – especially when operations are frequent and involve large data volumes.

Automation means fewer manual operations

Recurring Data Jobs allow you to schedule cyclic tasks, such as daily exports to external systems or regular updates of reference data.
Automation reduces manual operations and the risk of mistakes, while ensuring data remains current across the application ecosystem.

In conclusion

The Data Management Framework is not just a technical tool – it’s a key element of data management in D365 F&O. It enables repeatable migrations, supports integrations, and improves data quality within the organization. A well-configured DMF becomes a solid foundation for further digital transformation and efficient use of data in daily business operations. Now is a good time to consider how organized and predictable your data processes are – because that’s where the real effectiveness of DMF begins.
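As an illustration of how a Recurring Data Job can be fed from outside the system, here is a minimal Python sketch that pushes a file to the recurring integrations enqueue endpoint. The tenant, app registration, environment URL, activity ID, and entity name are placeholders, and the job itself is assumed to already exist and be enabled in D365 F&O.

```python
import requests

# Placeholders – replace with your tenant, app registration, and environment details.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "<secret from your key vault>"
FNO_URL = "https://contoso-test.sandbox.operations.dynamics.com"
ACTIVITY_ID = "<recurring data job ID from Manage recurring data jobs>"
ENTITY_NAME = "Customers"
DATA_FILE = "customers.csv"

def get_token() -> str:
    """App-only token for the F&O environment (client credentials flow)."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": f"{FNO_URL}/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Push one file into the recurring data job queue; DMF processes it on its own schedule.
with open(DATA_FILE, "rb") as f:
    resp = requests.post(
        f"{FNO_URL}/api/connector/enqueue/{ACTIVITY_ID}",
        params={"entity": ENTITY_NAME},
        headers={"Authorization": f"Bearer {get_token()}"},
        data=f,
    )
resp.raise_for_status()
print(f"Enqueued {DATA_FILE} for recurring job {ACTIVITY_ID}: HTTP {resp.status_code}")
```

Scheduling this push (for example from Power Automate or a cron job) gives you the cyclic, hands-off data flow described above, with DMF handling processing and error logging on its own schedule.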

AI That Puts Money Back Into Companies. A Conversation With Matt Kempson, COO AI at IFS, During Industrial X Unleashed

When I sat down with Matt Kempson, COO AI at IFS, during IFS Industrial AI X Unleashed, I immediately felt that I was about to hear something more than corporate slogans about artificial intelligence. Matt talks about AI in a deeply practical, almost operational way – always from the perspective of real problems that can be solved here and now. And indeed, this conversation turned out to be one of the most concrete I have conducted in recent years.

Industrial X Unleashed – where AI met real industry

Industrial X Unleashed was a one-day event held by IFS on November 13, bringing together industry leaders, customers, and experts working at the intersection of industrial operations and artificial intelligence. The conference did not focus on futuristic visions, but on practical applications of AI – the kind that are already transforming how manufacturing, service, energy, and logistics companies operate.

Throughout this intensive day, IFS presented both its current AI strategy and concrete, fully functioning solutions used by customers today. Discussions focused on intelligent inventory management, digital workers supporting field technicians, supply chain automation, and tools that reduce diagnostic and repair times. It was also an excellent opportunity to talk about the challenges faced by companies in Poland and around the world – from the shortage of skilled engineers to pressure to reduce operational costs and the growing need to automate processes. It was in this context that my conversation with Matt Kempson, COO AI at IFS, took place.

The biggest opportunity for AI in industry – where companies could gain the most today?

When I opened the conversation by asking Matt about the biggest opportunity for AI across the industries IFS serves, I expected to hear about predictive maintenance, process automation, or intelligent planning. Instead, Matt began with a topic that is painfully down-to-earth – and financially enormous: inventory. And not “inventory” in the sense of having a slightly better-stocked warehouse, but in the sense of millions in frozen capital that companies often don’t even know about. Matt stated it clearly:

Block Quote

Listening to him, I immediately thought of many Polish companies investing in machines, automation, and ERP systems – while inventory quietly drains their cash flow in the background. Then, he took it further:

Block Quote

It was at this point that I fully understood what Matt meant when he described inventory as “the most underestimated domain” in industrial AI. Not advanced predictive models, not robotics – but real-time visibility, normalization, and understanding of what a company actually owns. Matt also emphasized a crucial point: inventory is the area where AI delivers the fastest ROI. Often from day one. And maybe, as Matt implicitly suggested, the perfect AI use case many companies are searching for… is already lying on a shelf in their warehouse.

How companies accelerated AI adoption – what separated leaders from those stuck in pilots?

My second question to Matt addressed a problem I see constantly in Poland: companies begin AI initiatives, run pilots… and stay stuck at that stage.
Matt immediately acknowledged that this wasn’t just a local issue:

Block Quote

He explained that AI adoption always began with people – and that organizations could not bypass this phase:

Block Quote

He highlighted a step that most companies overlook:

Block Quote

Only then, Matt said, should companies move toward quick wins using ready-made solutions:

Block Quote

The most powerful statement came when he explained why companies get stuck:

Block Quote

Then came a warning every decision-maker should remember:

Block Quote

This part of the conversation showed me clearly that AI leaders weren’t distinguished by the tools they used – but by the pace and order in which they adopted them.

What could IFS change in the coming year – the three AI developments Matt was most excited about

When I asked Matt which upcoming AI innovations at IFS excited him the most, he didn’t hesitate: “Let me give you three.”

IFS.AI Logistics – a true “game changer” for supply chain

The first area was the upcoming relaunch of IFS.AI Logistics, enhanced by the acquisition of Seven Bridges.

Block Quote

Digital workers – more work done, less paperwork

Second were the digital worker capabilities demonstrated during IFS Loops:

Block Quote

IFS Nexus Black – a true inventory revolution in just six weeks

The final area was clearly the one closest to Matt’s heart: inventory.

Block Quote

And then he delivered the sentence that shows just how transformative AI has become:

Block Quote

At that moment, it became clear that we weren’t talking about the future – but about solutions ready to reshape operations right now.

Why AI has become a strategic tool for industry – summary of my conversation with Matt Kempson

As we wrapped up, I asked Matt about the broader meaning of AI for companies – beyond individual use cases. His answer captured perfectly the essence of our discussion. He stressed that companies often hunt for savings at the end of the year in the worst possible places – by cutting staff or squeezing suppliers. Meanwhile, the real opportunity lies elsewhere:

Block Quote

Then came the statement that stayed with me long after our conversation:

Block Quote

This made me realize that our discussion was not about the future – but about very real, very urgent decisions companies can make right now. About the money they lose every day by waiting. And about the competitive edge earned by those who don’t delay. That’s why this conversation with Matt was one of the most eye-opening and concrete discussions I’ve had this year.

9 signs that D365 implementation was done poorly

Implementation of the Microsoft Dynamics 365 ERP system is a project that involves every department in the company and requires many months of intensive work. Sometimes it’s not immediately obvious that something is wrong. Below we present 9 signs that something may have gone off track and is worth fixing. What should you look at to assess what went wrong?

1. Hypercare never cools down
A long period of “firefighting” after go-live, a flood of recurring issues, lack of stability, and the feeling of being “stuck in tickets.” This is a classic sign that implementation errors are taking their toll after launch.

2. Low adoption and workarounds outside the system
Key steps are handled in Excel, “shortcuts,” or auxiliary systems, and decisions are not based on a single, centralized source of truth.

3. Polish localization isn’t complete
Omitted legal and tax requirements (e.g., e-invoicing, SAF-T, split payment, white list) lead to delayed go-live, document recording issues, and user resistance. Additional red flags include the lack of automatic NBP exchange rate handling with delay and taxpayer validation.

4. Poor data migration and lack of validation
Inconsistent master data, errors in the opening balance, and issues with settlements after data transfer – all result from insufficient migration testing and data quality checks.

5. Testing features, not processes
Testing is limited to individual screens instead of end-to-end scenarios, with no integration, regression, or performance tests; users are brought into UAT too late.

6. Customizations instead of configuration
A high number of code modifications, no extension guidelines, and no update maintenance plan – this is a recipe for a fragile system and costly upgrades.

7. Disorganized architecture and integrations
No approved application blueprint, unclear role of D365 in the ecosystem, inconsistent or duplicate integrations and data flows.

8. Unclear roles, weak training, and change management
Missing roles like Solution Architect, unclear division of responsibilities, and minimal training or instructions – users don’t “get” the solution even from the testing phase.

9. Chaos in environments and licensing
Poorly managed environments (Dev/Test/UAT/Prod), no refresh/sync procedures, plus suboptimal licenses and user roles – this signals high maintenance costs and team workflow bottlenecks.