Who we are:
Ceres is leading the way in clean energy innovation, pioneering advancements in electrolysis for green hydrogen production and fuel cells for future power solutions. Through our dynamic licensing model, we've built powerful partnerships with major multinational companies such as Bosch, Doosan, Shell, and Weichai, and our solid oxide platform is transforming energy systems, delivering high-efficiency green hydrogen to decarbonise some of the most emissions-intensive industries, including steelmaking and future fuels.
At Ceres, we foster a workplace driven by passion and purpose. We support our team to think ambitiously, collaborate creatively and confront complex challenges directly. Innovation is at the core of who we are, and we strive to push the boundaries of what’s possible with technology.
Purpose of the role:
This role sits within the Modelling and Digitalisation department, which consists of highly skilled and dedicated modelling and simulation engineers, data scientists, and data engineers. The department specialises in advanced multi-domain computational modelling and data analysis, and creates bespoke data products and cloud data platform solutions to support all core areas of the business, with a focus on accelerating the company’s product and technology development.
You will work in our dedicated data team, which develops and maintains our data lakehouse platform, built on Azure Databricks, and our data pipelines, built on IoT and Databricks streaming services. You will be responsible for building and maintaining robust, secure, end-to-end data pipelines and data models from internal and remote sources, delivering trusted datasets to end customers both within and outside Ceres.
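To give candidates a flavour of this work, below is a minimal, illustrative sketch (not production code) of the kind of PySpark Structured Streaming job the team builds: reading device telemetry from an Event Hubs-compatible Kafka endpoint and landing it in a Delta Lake table. All names (namespace, topic, checkpoint path, table) are hypothetical placeholders, and authentication options are omitted for brevity.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

raw = (
    spark.readStream
    .format("kafka")  # Azure Event Hubs exposes a Kafka-compatible endpoint
    .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
    .option("subscribe", "device-telemetry")  # hypothetical topic
    .option("startingOffsets", "latest")
    # SASL/SSL authentication options omitted for brevity
    .load()
)

# Project the Kafka record into a typed telemetry row.
telemetry = raw.select(
    F.col("key").cast("string").alias("device_id"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

# Land the stream in a Delta table, checkpointing for exactly-once delivery.
(
    telemetry.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/device_telemetry")  # hypothetical path
    .trigger(availableNow=True)  # process all available data, then stop
    .toTable("bronze.device_telemetry")  # hypothetical Unity Catalog table
)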
Key Accountabilities:
• Maintain and enhance our Azure Databricks Data Platform by working collaboratively with our dedicated data team and other stakeholders in the business.
• Build and maintain data ingestion pipelines into Delta Lake through Python-coded Databricks workflows.
• Configure and support Databricks compute services in line with best practice.
• Support the evolution of our data platform as it migrates fully to Unity Catalog, taking advantage of its new capabilities.
• Build and maintain IoT data streaming pipelines from source to cloud.
• Build robust, automated monitoring and alerting solutions to support SLA targets and rapid responses to outages (see the illustrative sketch after this list).
• Continuously review and pilot data engineering technologies for solution improvements and best practices.
• Ensure our data resilience and disaster recovery (DR) solutions are delivered, tested, and fit for purpose.
• Work with all core business functions, including customer-facing projects, to identify and create data domains, models, and datasets that integrate with our latest data products, enabling fast, data-driven decisions across the business.
• Where needed, lead the delivery of work packages within cross-disciplinary teams to deliver data platform solutions that meet critical business requirements.
• Contribute to the long-term strategy and the future direction of the department.
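As an illustration of the monitoring work mentioned above, here is a minimal sketch of a data-freshness check that could back an SLA alert: a scheduled job fails, triggering the workflow's failure notification, if a Delta table has received no data within a threshold. The table name and the 30-minute threshold are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# True if the newest record is older than the threshold; None if the table is empty.
stale = (
    spark.table("bronze.device_telemetry")  # hypothetical table
    .agg(
        (F.max("ingested_at") < F.current_timestamp() - F.expr("INTERVAL 30 MINUTES"))
        .alias("stale")
    )
    .collect()[0]["stale"]
)

if stale is None or stale:
    # A scheduled Databricks job surfaces this failure as an alert.
    raise RuntimeError("SLA breach: no telemetry ingested in the last 30 minutes")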
Knowledge and skills required for the role:
• A university-level qualification in a relevant subject (e.g. Computer Science, Data Science, Engineering) is desirable
• Several years of experience in data engineering using Databricks, Python, SQL Server services/T-SQL, Azure data services (including storage and IoT services), and Azure DevOps or similar Agile framework tools
• Experience working in the engineering and/or manufacturing sectors is desirable
• Experience with delivering structured streaming solutions, Unity Catalog, Azure networking and security, and maintaining and optimising TB+ Delta tables is desirable
• Excellent communicator, capable of building and maintaining effective relationships at all levels
• Creative, methodical problem solver
• Fast learner of new domain knowledge; a proactive self-starter able to work with minimal instruction in a fast-paced, collaborative environment, not afraid to fail and try again