We are looking for an experienced Data Architect to join Hummingbird Technologies and help scale our multi-terabyte image-processing pipeline for the global agricultural revolution.
You will join a talented team of software engineers, big data engineers, computer vision experts and machine learning researchers, and will be expected to get up to speed rapidly in this fast-paced, multi-disciplinary environment. We are not solving trivial problems: we are researching and developing predictive analytics products that will shape the future of crop and farm management, used across the globe to help feed the world and minimise the long-term environmental impact of modern, large-scale agriculture.
Hummingbird was founded in 2016 and is the only remote sensing business in UK agriculture to use artificial intelligence, combining imagery from drones, planes and satellites with weather and soil data and expert plant pathology to enable precision agriculture. We use the most advanced machine learning and computer vision techniques to deliver actionable insights on crop health directly to the field. We were named Best British Tech Startup 2019 and are driving the next generation of precision agriculture to feed the world in a sustainable way!
Existing backers of the business include the European Space Agency, Sir James Dyson, Horizon Ventures, Downing Ventures and Velcourt, the UK’s largest commercial farming operation. Our technology partners include Google UK and Cranfield University.
Please note, we are unable to provide sponsorship for this role.
This will be a hands-on position, guiding our technical decisions on our data processing platform. Responsibilities include:
- Designing and developing scalable, fault-tolerant, self-healing big data architecture
- Defining, developing and extending processing paths for a variety of image sources (API, multi-TIFF, hyperspectral, multispectral, RGB)
- Developing and implementing the data architecture vision alongside the Head of Engineering and CTO
- Driving the scalability, operational quality and KPIs of our terabyte-scale data processing pipeline
- Providing day-to-day technical leadership and mentorship within a distributed engineering team
- Ensuring effective communication across our distributed engineering team, travelling 10-15% of the time within Europe
Requirements:
- Hands-on polyglot engineering experience, including Java (Spring)
- Experience with Python engineering practices
- Demonstrable experience in Big Data architecture design and implementation
- Experience of API / microservices design patterns and web technologies
- Experience with multiple Big Data / message broker tools e.g. Kafka, Flink, Akka, Hadoop/HDFS, RabbitMQ, ActiveMQ, Spark
- Production experience of AWS or GCP
- Production experience of Kubernetes or Docker Swarm
- Database selection (document, graph, columnar, relational), design, implementation and data modelling
- Knowledge of the principles governing best practice in platform data architecture, management and governance
- Experience with agile methodologies, including TDD, BDD, Scrum and Kanban
- Technical skills in Data Science, Data Ingestion, Data Augmentation, Big Data and Cloud platforms
- Excellent problem-solving and communication skills
- MSc in Computer Science, Engineering or relevant field
Desirable:
- Experience in applied machine learning, computer vision and image processing
- Experience in geospatial databases, object manipulation and ETL/transformations
- Experience in electronics/embedded systems for UAVs
- Personal interest in environmental impact and sustainability