This paper describes the definition of a typical next-generation space-based weak gravitational lensing experiment. We first adopt a set of top-level science requirements from the literature, based on the scale and depth of the galaxy sample, and on the avoidance of systematic effects in the measurements that would bias the derived shear values. We then identify and categorise the factors contributing to the systematic effects, combining them with the correct weighting so as to fit within the top-level requirements. We present techniques that permit the performance to be evaluated, and we explore the limits to which the contributing factors can be managed. Besides the modelling biases resulting from the use of weighted moments, the main contributing factors are the reconstruction of the instrument point spread function (PSF), which is derived from the stellar images in the field, and the correction of the charge transfer inefficiency (CTI) in the CCD detectors caused by radiation damage.
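The weighted-moments approach mentioned above can be illustrated with a minimal sketch: ellipticity is estimated from Gaussian-weighted quadrupole moments of a galaxy image via the standard combination e1 = (Qxx - Qyy)/(Qxx + Qyy), e2 = 2 Qxy/(Qxx + Qyy). This is a generic illustration, not the paper's actual pipeline; the function name and the weight scale `sigma_w` are assumptions for the example.

```python
import numpy as np

def weighted_ellipticity(image, sigma_w):
    """Ellipticity from Gaussian-weighted quadrupole moments (illustrative).

    `sigma_w` is the scale of the circular Gaussian weight; in a real
    pipeline it would be matched adaptively to the object size.
    """
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # First pass: weighted centroid, with the weight centred on the stamp.
    w = np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / (2 * sigma_w**2))
    f = image * w
    xc, yc = (f * x).sum() / f.sum(), (f * y).sum() / f.sum()
    # Second pass: weighted quadrupole moments about the centroid.
    w = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * sigma_w**2))
    f = image * w
    norm = f.sum()
    qxx = (f * (x - xc) ** 2).sum() / norm
    qyy = (f * (y - yc) ** 2).sum() / norm
    qxy = (f * (x - xc) * (y - yc)).sum() / norm
    # Standard ellipticity combination; the weight dilutes the moments,
    # which is one source of the modelling bias discussed in the text.
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2 * qxy / denom
```

Because the weight truncates the galaxy profile, the recovered ellipticity is biased low relative to the unweighted value, which is precisely why such estimators require the careful bias budgeting described in this paper.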
If the instrumentation is stable and well calibrated, we find that extant shear measurement software from the Gravitational Lensing Accuracy Testing 2010 (GREAT10) challenge already meets requirements on galaxies detected at a signal-to-noise ratio of 40. Averaged over a population of galaxies with a realistic distribution of sizes, it also meets requirements for a 2D cosmic shear analysis from space. If used on fainter galaxies or for 3D cosmic shear tomography, existing algorithms would need to be calibrated on simulations to avoid introducing bias at a level comparable to the statistical error. Requirements on the hardware and calibration data are discussed in more detail in a companion paper. Our analysis is intentionally general, but it is specifically being used to drive the hardware and ground segment performance budget for the design of the European Space Agency's recently selected Euclid mission.