
Our expertise is the design and evaluation of clinical trials, post-marketing studies and epidemiological surveys.

We have the appropriate knowledge, analytical excellence and solutions in clinical trials management and outcomes research. Our focus is always on our clients' needs and goals.
Clinical trials

Importance of Design Selection

There are many accepted and frequently applied study designs: parallel or cross-over, open or blinded, prospective, retrospective or case-study designs, and so on. Each design provides some flexibility and imposes some limitations at the same time. The research/medical questions determine which limitations are acceptable and what level of flexibility is required. The study statistician forms this picture by transforming the research questions into a strict mathematical/statistical framework. An easy example illustrates this creative process: the efficacy of a blood-pressure-lowering intervention (say, a drug) can be measured on a continuous scale (SBP/DBP decrease) and on a frequency distribution (normotensive vs. abnormal BP). In a perfect study design both opportunities are examined and the best fit to the specific study drug is selected.
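To make the blood pressure example concrete, here is a minimal Python sketch (our production work uses SAS/R); all numbers are invented, and it simply shows how the same simulated data can feed both a continuous endpoint (mean SBP decrease) and a binary one (proportion reaching a normotensive value):

```python
import random
import statistics

random.seed(42)

# Invented illustration: simulate baseline and post-treatment SBP values,
# assuming roughly a 12 mmHg mean treatment effect.
baseline = [random.gauss(155, 10) for _ in range(200)]
post = [b - random.gauss(12, 8) for b in baseline]

decreases = [b - p for b, p in zip(baseline, post)]
mean_decrease = statistics.mean(decreases)               # continuous endpoint
responder_rate = sum(p < 140 for p in post) / len(post)  # binary endpoint

print(f"mean SBP decrease: {mean_decrease:.1f} mmHg")
print(f"normotensive at follow-up: {responder_rate:.0%}")
```

Dichotomising the continuous measurement discards information, so the two endpoints generally require different sample sizes; examining both at the design stage is exactly the point made above.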

Transformation of the research questions into statistical statements and selection of the optimal design can determine the outcome of a study.

Classical Designs

There are numerous "classical" designs. They are well known, and the associated sample size determination (design and sample size determination are close relatives) is generally built into software such as SAS, R or PASS. The ideal study design can usually be chosen from these traditional ones. But they also have some disadvantages compared to the so-called modern designs: pooling of different phases is not really supported, and flexibility in changing the number of treatment arms practically does not exist.

Modern Designs

The term "modern design" is not exactly defined. Generally, those designs are considered here which somehow extend the limitations of the "classical (frequentist) designs". Modern, sometimes labelled Bayesian, designs are generally more cost-effective than classical designs, as they allow the outcomes to be evaluated after each patient and allow the study to be stopped (with a confirmed positive outcome) before inclusion of the planned number of patients. But a price has to be paid here, too.
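To make the idea of evaluating after each patient concrete, here is a toy Python sketch (our actual analyses are done in SAS/R) of a Bayesian-style stopping rule; the Beta(1, 1) prior, the 50% reference rate, the 0.95 certainty threshold and the response rates are all invented for illustration, and this is in no way a validated design:

```python
import random

random.seed(1)

def run_trial(true_rate, max_n=100, p0=0.5, certainty=0.95, draws=5000):
    """After each patient, update a Beta posterior on the response rate and
    stop early once the posterior probability of exceeding p0 passes the
    certainty threshold."""
    successes = 0
    for n in range(1, max_n + 1):
        successes += random.random() < true_rate
        failures = n - successes
        # Monte Carlo estimate of P(rate > p0) under the Beta posterior
        post = sum(random.betavariate(1 + successes, 1 + failures) > p0
                   for _ in range(draws)) / draws
        if post > certainty:
            return n, "stopped early for efficacy"
    return max_n, "ran to full enrolment"

n, outcome = run_trial(true_rate=0.8)
print(n, outcome)
```

With a truly effective treatment the rule typically stops well before the maximum enrolment; the "price" mentioned above is the heavier planning, simulation and validation burden such designs carry.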

Sample Size Determination

What is the importance of this? Sample size determination in clinical trials is required for two reasons. One is ethical. As the name says, this is a "trial", designed by scientists and approved by authorities. There are risks, and we all would like to minimise the risk of causing any harm to any volunteer or patient. The real question is: how can we minimise the risk while still reaching the scientific goals? This is what sample size determination is about. The calculations provide the required sample size to confirm the research hypothesis (more precisely, to reject the opposite one, but that is a different story). Another view: we measure the "success" of a study, generally, in p-values. We say that we are quite certain of the efficacy of a treatment if the statement "the treatment is not efficient enough" can be rejected with 95% confidence. Due to the nature of the underlying mathematics, the probability of rejection increases as the sample size increases. To simplify: you could "prove" almost any statement by increasing the sample size towards infinity. Each design and each underlying statistical tool has a sample size formula that gives the minimum required to demonstrate that specific statistical assumption, if it is really valid. To avoid any potential bias, or fraud, clinical trials should be based on the calculated sample sizes.

Sample size determination can be performed with the help of pre-built functions in SAS or R. In more complex designs it can also be based on simulation.

Sample size in closed formula

Statistical significance is generally defined as the outcome of a statistical test. A statistical test is no more and no less than a formula over certain input data. If a statistical test can be expressed as a closed formula, then the required sample size can also be determined as the output of a mathematical function. These procedures are well programmed and documented in SAS, R and PASS.
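As a minimal Python illustration of such a closed formula (the packaged routines in SAS/PASS use a t-based version with a small correction), the normal-approximation sample size per group for comparing two means is:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2"""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance level
    z_b = z(power)           # target power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative numbers: detect a 5 mmHg difference with SD 10 at 80% power
print(n_per_group(delta=5, sigma=10))  # 63 per group
```

The same formula, inverted, shows the point made above about p-values: as n grows, ever smaller differences become "significant".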

Sample size by simulation

There are more complex designs where the statistical evaluation does not depend on a single statistical test but rather on a process of statistical testing. In those cases the required sample size cannot be determined with closed formulas: it requires a kind of simulation instead. This procedure is much more complicated, not because of the greater programming effort, but due to the increased validation requirements.
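The simulation approach can be sketched in a few lines of Python (a validated production version in SAS/R would look very different): for a candidate sample size, simulate many trials, count how often the test rejects, and increase n until the estimated power reaches the target. The numbers below are illustrative only.

```python
import random
import statistics
from statistics import NormalDist

random.seed(7)

def simulated_power(n, delta, sigma, alpha=0.05, n_sims=2000):
    """Estimate power by simulating n_sims trials with n subjects per group
    and applying a normal-approximation two-sample test to each."""
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        a = [random.gauss(0, sigma) for _ in range(n)]
        b = [random.gauss(delta, sigma) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        rejections += abs(z) > crit
    return rejections / n_sims

power = simulated_power(n=63, delta=5, sigma=10)
print(f"estimated power at n=63 per group: {power:.2f}")
```

For this simple design the simulation reproduces the closed-formula answer; its real value is in designs with interim looks or adaptive rules, where no closed formula exists.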

Clinical Data Interchange Standards Consortium (CDISC)

As CDISC states about itself:

“CDISC creates clarity in clinical research by bringing together a global community of experts to develop and advance data standards of the highest quality. Together, we enable the accessibility, interoperability, and reusability of data for more meaningful and efficient research that has greater impact on global health.”

The roots go back to 1997 – when Planimeter was also established – when a group of volunteers started to set up standards for clinical study databases, variable names and formats. The CDISC Consortium is already part of the history of drug development.

During our evaluation and reporting activities we also apply and follow the CDISC terminologies and rules.

CDISC is a pharmaceutical standard that supports all aspects of data collection, data management and reporting.


“CDASH establishes a standard way to collect data consistently across studies and sponsors so that data collection formats and structures provide clear traceability of submission data into the Study Data Tabulation Model (SDTM), delivering more transparency to regulators and others who conduct data review.”



“SDTM provides a standard for organizing and formatting data to streamline processes in collection, management, analysis and reporting. Implementing SDTM supports data aggregation and warehousing; fosters mining and reuse; facilitates sharing; helps perform due diligence and other important data review activities; and improves the regulatory review and approval process. SDTM is also used in non-clinical data (SEND), medical devices and pharmacogenomics/genetics studies.

SDTM is one of the required standards for data submission to FDA (U.S.) and PMDA (Japan).

Details on the requirements for FDA are specified in the FDA’s Data Standards Catalog for NDA, ANDA, and certain BLA submissions. For more information, please visit the FDA Guidance on Standardized Data.

Details on the requirements for PMDA can be found on the Advanced Review with Electronic Data Promotion Group page.”



ADaM defines dataset and metadata standards that support:

  • efficient generation, replication, and review of clinical trial statistical analyses, and
  • traceability among analysis results, analysis data, and data represented in the Study Data Tabulation Model (SDTM).

ADaM is one of the required standards for data submission to FDA (U.S.) and PMDA (Japan).

Details on the requirements for FDA are specified in the FDA’s Data Standards Catalog for NDA, ANDA, and certain BLA submissions. For more information, please visit the FDA Guidance on Standardized Data.

Details on the requirements for PMDA can be found on the Advanced Review with Electronic Data Promotion Group page.


Data Management

Data management is a very important part of clinical trials. It is generally undervalued, as it has no visible impact on the outcome, but weak data management leads to weak analysis and reporting. There are several steps of data management. Basically, paper and electronic CRF (case report form) collection can be considered as different approaches; see the details below. Data management has its own standard activities independently of the mode of data collection. It is very important that data integrity be sustained, and the data cleaning process should be supported as well.

Data Management can cover data collection, querying process and database cleaning, coding or integration of external data sources.

Paper CRF

While the eCRF has unquestionable advantages, paper-based CRFs are still relevant, especially in studies with fewer than 100 subjects, where manual data entry is more cost-effective. There are only a few companies left today that provide manual data entry. Planimeter is still one of them.

Even when we are talking about manual data entry, it is supported internally by our eCRF system; see more details on the right side.

Data management is not restricted to data entry or data collection. Query management – which is almost completely automated in the case of an eCRF – is also an integral part of data management activities. When the database is almost clean, coding according to the different international coding standards should be performed.

The data are to be analysed. To do that, the quantity and quality of protocol violations should be classified to set the analysis database flags. Finally, everything should be documented, communicated to the Sponsor and filed according to sector standards.

Electronic Case Report Form (eCRF/EDC)

We started to develop our first eCRF system 15 years ago. Since then the solution has become a fully 21 CFR Part 11 compliant, widely known and acknowledged web-based application.

Some features:

  • Integration of a new study is extremely fast, as we generate the CRFs with the help of an eCRF generation engine
  • Strict process control: data are not only collected and stored, but the processes are also managed with the help of automated notifications
  • Easy connection to external data sources
  • Pre-built integration with R or R/Shiny, making any calculation easy and allowing a flexible dashboard to be set up
  • Online systems have to face many threats. Several certificates prove that we are continuously working on improving safety and data privacy.

Documents of Data Management and Statistical Activities

The design, conduct, evaluation and reporting of clinical trials are accompanied by various documentation requirements. Some types of documents support the planning and conduct of the trial: e.g. a Data Management Plan tells the players how to manage the data (and when they will be available), and a Statistical Analysis Plan tells how to analyse and report the collected information.

Other documents show the outcome of the study. The tables, listings and graphs are the direct outputs of the statistical evaluation; the Statistical Report is a structured and commented selection of the main results.

Between the plans and the outcomes there are documents which support the working processes (delivery logs, checklists), document the activities (programming logs), serve quality assurance (review documents) and preserve the anomalies experienced during the different steps (notes-to-file).

Preparation of the Protocol, CRF and other accompanying documentation is an integral part of DM and statistical support of clinical trials.

Data management

The primary document of data management is the Data Management Plan (DMP). It covers all important aspects of data collection, storage and the handling of exceptions. Sometimes a so-called Data Validation Plan is also created; this document describes the processes that assure that the data in the clinical trial database are indeed identical to the source healthcare data.

Query management is an important part of data management. The first step is to set up the check list, i.e. the list of syntactic and semantic rules to be checked on the data. The majority of such checks can be integrated into the eCRF used for data collection, but a process for manual querying should also be established.
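Such edit checks are easy to picture as a list of rules applied to each record. The Python sketch below is purely illustrative: the field names, limits and query texts are invented, not taken from any real check list.

```python
from datetime import date

# Hypothetical edit checks of the kind a check list might specify.
CHECKS = [
    ("visit before consent",
     lambda r: r["visit_date"] < r["consent_date"]),
    ("systolic BP out of plausible range",
     lambda r: not 60 <= r["sbp"] <= 260),
    ("end date before start date",
     lambda r: r["treat_end"] < r["treat_start"]),
]

def run_checks(record):
    """Return the list of query texts raised for one CRF record."""
    return [text for text, fails in CHECKS if fails(record)]

record = {
    "consent_date": date(2023, 3, 1),
    "visit_date": date(2023, 2, 27),   # inconsistent on purpose
    "sbp": 132,
    "treat_start": date(2023, 3, 2),
    "treat_end": date(2023, 4, 2),
}
print(run_checks(record))  # ['visit before consent']
```

In an eCRF most of these rules fire at entry time; the manual querying process covers whatever cannot be expressed as such a rule.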

While all data corrections should be performed by the Investigators, there may be automatic, self-evident corrections. The best example is assuring an identical form for months (01/Jan/January/1 = January, etc.).
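The month example can be sketched in a few lines of Python (a hypothetical illustration; real normalisation rules are specified in the DMP):

```python
import calendar

# Map the many ways a month can be written (1, "01", "Jan", "January")
# onto a single canonical form.
MONTHS = {}
for i in range(1, 13):
    name = calendar.month_name[i]   # "January"
    abbr = calendar.month_abbr[i]   # "Jan"
    for key in (str(i), f"{i:02d}", name, abbr):
        MONTHS[key.lower()] = name

def normalise_month(raw):
    return MONTHS[str(raw).strip().lower()]

print(normalise_month("Jan"))  # January
```

Because the mapping is unambiguous in every direction, applying it does not alter the meaning of the data, which is what makes the correction "self-evident".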

Coding is done using the ATC and MedDRA dictionaries for drugs and adverse events, respectively.

The Specification of Protocol Deviations summarises and classifies the observed protocol deviations. This document, and especially the classification of the events into minor and major, has a decisive role in setting the analysis population flags.

Statistical programming

This activity practically means the programming, i.e. the evaluation of the study. The work is driven by the SAP and the Table Shell and supported by specific SOPs, such as the application of programming conventions in code development.

As the activity is programming, which can only be understood by programmers, it is very important to assure quality by introducing several reviewers to follow and control the programming activities. The most important documents of this phase are the quality assurance documents which accompany the whole code development.

Unexpected situations can emerge during evaluation, leading to a modification of the SAP or the Table Shell. Such actions should also be documented appropriately.

Statistical activities

Statistical activities – not considering the statistical programming itself – can be summarised in three main documents:

  • Statistical Analysis Plan (SAP)
    • The SAP contains all the relevant information needed to perform the statistical analysis. Namely, it very clearly associates statistical methods with the analysis of primary and secondary endpoints, demographic data, laboratory observations, adverse events, etc. The SAP also determines the criteria of the testing procedures and the significance levels. The writers of SAPs are also SAS programmers, and as such they have the responsibility to give guidance for proper parametrisation of SAS functions. Alternative analysis approaches should also be mentioned in the SAP: we all know that classical ANOVA should only be applied if the standard deviations of the groups are close to each other; otherwise some non-parametric test should be applied. These choices are clearly highlighted in SAPs.
  • Table shells
    • The table shell is the empty report containing tables, listings and graphs. Each letter (and each space) has its own importance in the reporting. The table shell for tables and listings contains all the headers, footers, titles and numbering, and even some rules for the content, e.g. the number of decimal places or the indication of missing values. Footnotes are also very important in the interpretation of the results; they too are planned and contained in table shells.
  • Statistical Report
    • The Statistical Report is a commented summary of the main results. A more detailed description can be found below.
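The variance check mentioned in the SAP description above can be sketched as follows; this is a minimal Python illustration (our production analyses use SAS), and the 2x ratio threshold is an invented example, not a prescribed rule:

```python
import statistics

def choose_test(*groups, max_ratio=2.0):
    """Toy SAP-style pre-check: if group standard deviations differ too
    much, fall back to a non-parametric test instead of classical ANOVA."""
    sds = [statistics.stdev(g) for g in groups]
    if max(sds) / min(sds) <= max_ratio:
        return "ANOVA"
    return "non-parametric (e.g. Kruskal-Wallis)"

# Similar spreads -> parametric branch
print(choose_test([1.0, 2.1, 3.0, 2.2], [2.0, 3.1, 4.2, 3.0]))  # ANOVA
```

In a real SAP the decision rule (and the exact alternative test) is written out explicitly, so the programmer never has to improvise at analysis time.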


The outcome of reporting is the Statistical Report. This activity and document are described in more detail below.

Generation of Tables, Listings and Graphs

The data collected in clinical trials are tabulated in a very formal way. The required statistical activities are summarised in the Statistical Analysis Plan (SAP), while the associated Table Shell gives the exact format of the outputs. Basically, there are three types of output:

  • Tables convey the majority of the obtained information. Tables generally represent several dimensions (frequently more than two) and are capable of showing information in a structured manner (e.g. mean values of laboratory data before and after treatment).
  • Graphs are generally introduced to support the interpretation of the primary, some secondary or safety results. A good example is survival-type analysis, where a graph with the Kaplan-Meier estimate really says more than 100 words.
  • Listings provide transparency: they make a potential recheck of the results possible and help to identify and explain data issues.
Preparation of Tables, Listings and Graphs is supported by so-called statistical programming, but specification and quality control are also solid tasks here.
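As a sketch of the survival example above, the Kaplan-Meier estimate itself is simple to compute; the Python below uses invented data, and real reporting would of course rely on validated procedures (e.g. SAS PROC LIFETEST or R's survival package):

```python
def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = event, 0 = censored.
    Returns (time, survival) points where the curve drops."""
    # At tied times, process events before censorings (the usual convention)
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(data)
    surv, curve = 1.0, []
    for t, e in data:
        if e:  # the survival estimate drops only at event times
            surv *= (at_risk - 1) / at_risk
            curve.append((t, round(surv, 3)))
        at_risk -= 1
    return curve

times  = [5, 8, 9, 12, 15, 20]   # invented follow-up times
events = [1, 1, 0, 1, 0, 1]      # 0 marks censored subjects
print(kaplan_meier(times, events))
```

Plotting these step points is what produces the familiar staircase curve that "says more than 100 words".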

Statistical Programming

This is very sector-specific terminology. It suggests a connection to statistics and to programming at the same time. This is true, but only partly. In practice, statistical programming means the preparation of the outputs of clinical trials (the so-called Tables, Listings and Graphs), in the majority of cases with the help of SAS software.

The activity is based on the Protocol, the Statistical Analysis Plan, the Table Shell and the associated ICH or CDISC guidelines, so it is quite a complex activity. Furthermore, as only programmers can understand what is really coded, several programmers have to work on a project to support each other and to provide efficient quality assurance.

During statistical programming the output templates described in the table shell are filled with data. Sometimes the data have a will of their own and do not obey the original concept laid out in the table shell. It may happen that the plans have to be adjusted to reality, so this is a creative activity.

SAS as a programming environment has its own rules. It is easy to add a label to a graph, but it is difficult to write a general function (a so-called macro) that places the label without covering other important information.

We at Planimeter have been using SAS Software from the very first moment, that is, for 23 years (so far). We have collected a lot of experience, built our own program libraries and set up important SOPs to support both the programming and the review phases.

Statistical Programming is the process which finally leads to the preparation of Tables, Listings and Graphs.

Statistical Reporting

In clinical trials the statistical report is the link between the tables, listings and graphs and the clinical study report. The latter is the final and official study-closing documentation for review by the Authorities. The Clinical Study Report (CSR) is created by Medical Writers and follows very strict rules even in formatting and document structure.

The Statistical Report is generally prepared by the statistical team of the study. It summarises the whole study, including the medical background, the research goals, the assessments and the schedule. So far this description could equally describe the Clinical Study Protocol; the statistical report, however, is more: it also contains the main results of the study (generally 20-40% of the outputs of the statistical analysis) with a detailed explanation of the outcomes. A similarly important part of the document shows the applied statistical methods. The analysis is always performed according to the instructions of the Statistical Analysis Plan (SAP), but the SAP is approved before database closure, so unexpected situations can emerge during the application of the SAP's rules (e.g. an unexpected distribution that makes a data transformation necessary, or a missing value pattern different from the expected one that implies a minor change in the applied statistical tool). Reality can override the original analysis plans, and it is allowed to modify them, but practically the Statistical Report is the only document where these types of changes are accessible.

The European Medicines Agency has issued two guidelines: one on statistical considerations in clinical trials, the other on the formatting requirements of statistical reports. At Planimeter we follow these guidelines as far as the actual Design, Protocol and SAP make it possible.

The program outputs contain all the results, but a meaningful, well-structured summary of them is a great help in their interpretation.

Statistical Considerations

ICH E9 statistical principles for clinical trials

Medical writers have to follow strict rules when writing the Clinical Study Report, and so do statisticians when preparing Statistical Reports. The European Medicines Agency (EMA) created a guideline containing the considerations to be applied in clinical trials. As it states:

“This document provides guidance on the design, conduct, analysis and evaluation of clinical trials of an investigational product in the context of its overall clinical development.”

The document was originally created in September 1998.

An addendum /ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials/ was added in February 2020.

Formatting requirements of Statistical Reports

ICH E3 Structure and content of clinical study reports

EMA also created a guideline for the structure and content of clinical study reports in July 1996.

This document aims to allow the compilation of a single core clinical study report acceptable to all regulatory authorities of the ICH regions.


Quality Assurance

The pharmaceutical industry is one of the most demanding sectors with respect to quality assurance. Maybe pharma people are not more careful with their jobs than others in other sectors (maybe they are), but what is certain: the quality control processes are very sophisticated. Providing a service under the same – or continuously improving – conditions over a long period of time has many aspects.

What we do at Planimeter: 

  • Choose our colleagues with care,
  • Qualify and monitor our contractors,
  • Provide continuous training opportunities (among others in the framework of a yearly training plan),
  • Go through internal and external (client) audits,
  • Apply our extended SOP system,
  • Improve the SOP system.
Data Management, Statistical Programming, Evaluation: each step has its own specific quality control requirements.


Safety and Efficacy Study of Orally Administered DS102 in Healthy Subjects

The study identifier is NCT02673593.

Planimeter was in charge of the eCRF, complete data management, statistical evaluation and reporting.

A Randomised, Double-Blind, Placebo-Controlled, Single Ascending Dose and Multiple Dose Phase I Study to Assess the Safety, Pharmacokinetics and Effect of Food on Orally Administered DS102 in Healthy Subjects.

The purpose of the study is to investigate the safety, pharmacokinetics and food effect of DS102 (up to 2000mg single and multiple daily doses) and placebo in healthy participants. DS102 capsules will be orally administered for up to 4 weeks, and will be compared against placebo. The study will enrol approximately 56 adult subjects.

Details coming soon…
