Veeva [NYSE: VEEV] is the leader in cloud-based software for the global life sciences industry. Committed to innovation, product excellence, and customer success, our customers range from the world's largest pharmaceutical companies to emerging biotechs. Veeva's software helps our customers bring medicines and therapies to patients faster.
We are the first public company to become a Public Benefit Corporation. As a PBC, we are committed to making the industries we serve more productive, and we are committed to creating high-quality employment opportunities.
Veeva is a Work Anywhere company, which means that you can choose to work in the environment that works best for you on any given day. Whether you choose to work remotely from home or work in an office, it's up to you.
Veeva is looking for a Data Architect to help build the IT organization's data strategy in support of our business customers, as well as internal initiatives. If you are a data person who knows how to build data pipelines, can aggregate data from multiple sources into a common data store, understands data governance and security, and has experience with industry-leading cloud data platforms and scripting, we'd love to talk with you.
The Data Architect will work closely with Veeva's Enterprise Architect, Integration Engineers, Business Operations customers, and Services Partners to help advance our data and analytics ambitions.
This role will serve as a hands-on contributor and a thought leader in the areas of data engineering, cloud data strategy, business intelligence, data modeling, and ETL/ELT.
What You'll Do
- Participate in discovery to analyze the existing implementation of data flows, data stores, data models, and data elements to understand challenges and define a high-level future state for advanced analytics
- Build, test, and monitor optimal data pipeline architecture, especially around IT and internal business operations systems
- Blend methodologies from machine learning and operations research to aggregate complex and disparate data sources into value-added information streams accessible by non-technical staff
- Define and build the architecture required for optimal extraction, transformation, and loading of data from a wide variety of data sources using technology such as cloud ELT, SQL, JDBC, and AWS big data technologies
- Build and maintain a robust model for cloud data governance, data security, and a Data Dictionary/catalog, including sources of authority
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build and scale systems to retrieve, process and make available internal and external datasets
Requirements
- Bachelor's degree in Computer Science, Information Systems, Engineering, Data Science, Mathematics, or another similarly technical field; or specialized training/certification or equivalent work experience
- 5+ years of professional experience in cloud data engineering and data science and associated technologies, utilizing cloud data platforms such as Databricks, AWS Redshift, and Snowflake (including relational, document, key/value, graph, and object stores)
- 5+ years of experience designing and implementing event-based stream processing solutions using technologies such as Kafka, Kinesis, and Flink
- 5+ years of experience designing and implementing data management solutions that enable Data Quality, Master and Reference Data Management, and Metadata Management
- 3+ years of experience with integration tools and API-led connectivity
- Experience implementing Data Governance principles and processes, data and network security standards and practices, and data catalog definition
- Proficiency in using visualization tools such as Tableau, Domo, or Power BI
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong verbal, written, and presentation skills with the ability to effectively communicate complex technical information to personnel at all levels of the organization
Nice to Have
- Specific experience with Data Warehouse/Data Lake configuration and development using Databricks platform
- Experience using Alteryx
- Experience with Tableau
- Experience operating in an Agile development environment
- Familiarity with the usage of Agile tools (JIRA / Confluence)
- Understanding of CI/CD deployment models and release strategy as well as SCM tools (Git preferred) and code management best practices
- Experience in AWS environment
- Experience with cloud ELT platforms such as AWS Glue, Stitch (Talend), or Fivetran
Veeva's headquarters is located in the San Francisco Bay Area, with offices in more than 15 countries around the world.
Veeva is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity or expression, religion, national origin or ancestry, age, disability, marital status, pregnancy, protected veteran status, protected genetic information, political affiliation, or any other characteristics protected by local laws, regulations, or ordinances. If you need assistance or accommodation due to a disability or special need when applying for a role or in our recruitment process, please contact us at [email protected].