
Protein-protein docking


Title: Protein-protein docking  
Author: World Heritage Encyclopedia
Language: English
Subject: List of topics related to protein
Publisher: World Heritage Encyclopedia


Macromolecular docking is the computational modelling of the quaternary structure of complexes formed by two or more interacting biological macromolecules. Protein–protein complexes are the most commonly attempted targets of such modelling, followed by protein–nucleic acid complexes.

The ultimate goal of docking is the prediction of the three-dimensional structure of the macromolecular complex of interest as it would occur in a living organism. Docking itself only produces plausible candidate structures. These candidates must be ranked using methods such as scoring functions to identify structures that are most likely to occur in nature.

The term "docking" originated in the late 1970s with a more restricted meaning: "docking" then meant refining a model of a complex structure by optimizing the separation between the interactors while keeping their relative orientations fixed. Later, the relative orientations of the interacting partners were allowed to vary, but the internal geometry of each partner was held fixed. This type of modelling is sometimes referred to as "rigid docking". With further increases in computational power, it became possible to model changes in the internal geometry of the interacting partners that may occur when a complex is formed. This type of modelling is referred to as "flexible docking".


The biological roles of most proteins, as characterized by which other macromolecules they interact with, are known at best incompletely. Even those proteins that participate in a well-studied biological process (e.g., the Krebs cycle) may have unexpected interaction partners or functions which are unrelated to that process.

In cases of known protein–protein interactions, other questions arise. Genetic diseases (e.g., cystic fibrosis) are known to be caused by misfolded or mutated proteins, and there is a desire to understand what, if any, anomalous protein–protein interactions a given mutation can cause. In the distant future, proteins may be designed to perform biological functions, and a determination of the potential interactions of such proteins will be essential.

For any given set of proteins, the following questions may be of interest, from the point of view of technology or natural history:

  • Do the proteins bind?

If they do bind,

  • What is the spatial configuration which they adopt in their bound state?
  • How strong or weak is their interaction?

If they do not bind,

  • Can they be made to bind by inducing a mutation?

Protein–protein docking is ultimately envisaged to address all these issues. Furthermore, since docking methods can be based on purely physical principles, even proteins of unknown function (or which have been studied relatively little) may be docked. The only prerequisite is that their molecular structure has been either determined experimentally, or can be estimated by a protein structure prediction technique.

Protein–nucleic acid interactions feature prominently in the living cell. Transcription factors, which regulate gene expression, and polymerases, which catalyse replication, are composed of proteins, and the genetic material they interact with is composed of nucleic acids. Modeling protein–nucleic acid complexes presents some unique challenges, as described below.


In the 1970s, complex modelling revolved around manually identifying features on the surfaces of the interactors and interpreting the consequences for binding, function and activity; any computer programs were typically used at the end of the modelling process, to discriminate between the relatively few configurations which remained after all the heuristic constraints had been imposed. The first use of computers was in a study on hemoglobin interaction in sickle-cell fibres.[1] This was followed in 1978 by work on the trypsin-BPTI complex.[2] Computers discriminated between good and bad models using a scoring function which rewarded large interface areas and pairs of molecules in contact but not occupying the same space. The computer used a simplified representation of the interacting proteins, with one interaction centre for each residue. Favorable electrostatic interactions, including hydrogen bonds, were identified by hand.[3]

In the early 1990s, more structures of complexes were determined, and available computational power had increased substantially. With the emergence of bioinformatics, the focus moved towards developing generalized techniques which could be applied to an arbitrary set of complexes at acceptable computational cost. The new methods were envisaged to apply even in the absence of phylogenetic or experimental clues; any specific prior knowledge could still be introduced at the stage of choosing between the highest ranking output models, or be framed as input if the algorithm catered for it. 1992 saw the publication of the correlation method,[4] an algorithm which used the fast Fourier transform to give a vastly improved scalability for evaluating coarse shape complementarity on rigid-body models. This was extended in 1997 to cover coarse electrostatics.[5]

In 1996 the results of the first blind trial were published,[6] in which six research groups attempted to predict the complexed structure of TEM-1 beta-lactamase with the beta-lactamase inhibitor protein (BLIP). The exercise brought into focus the necessity of accommodating conformational change and the difficulty of discriminating between conformers. It also served as the prototype for the CAPRI assessment series, which debuted in 2001.

Rigid-body docking vs. flexible docking

If the bond angles, bond lengths and torsion angles of the components are not modified at any stage of complex generation, the procedure is known as rigid-body docking. Whether rigid-body docking is good enough for most docking problems remains a subject of debate. When substantial conformational change occurs within the components at the time of complex formation, rigid-body docking is inadequate. However, scoring all possible conformational changes is prohibitively expensive in computer time. Docking procedures which permit conformational change, or flexible docking procedures, must intelligently select a small subset of possible conformational changes for consideration.
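The defining property of rigid-body docking is that a candidate pose is fully specified by a rotation and a translation of one partner, leaving its internal geometry untouched. A minimal sketch in NumPy (the three-atom "ligand" and the chosen pose are hypothetical):

```python
import numpy as np

def apply_pose(coords, rotation, translation):
    """Apply a rigid-body pose (rotation, then translation) to an
    (N, 3) array of atomic coordinates. Internal geometry, i.e. bond
    lengths, angles and torsions, is unchanged by construction."""
    return coords @ rotation.T + translation

# Hypothetical ligand: three atoms along the x-axis.
ligand = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [3.0, 0.0, 0.0]])

# 90-degree rotation about z, then a shift along y.
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
pose = apply_pose(ligand, rot, np.array([0.0, 5.0, 0.0]))

def pdist(a):
    """All pairwise interatomic distances."""
    return np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)

# Rigidity check: internal distances are preserved by the pose.
assert np.allclose(pdist(ligand), pdist(pose))
```

A flexible docking procedure would, in addition, perturb coordinates within each partner, which is exactly what makes its search space so much larger.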


Successful docking requires meeting two criteria:

  • Generating a set of configurations which reliably includes at least one nearly correct one.
  • Reliably distinguishing nearly correct configurations from the others.

For many interactions, the binding site is known on one or more of the proteins to be docked. This is the case for antibodies and for competitive inhibitors. In other cases, a binding site may be strongly suggested by mutagenic or phylogenetic evidence. Configurations where the proteins interpenetrate severely may also be ruled out a priori.

After making exclusions based on prior knowledge or stereochemical clash, the remaining space of possible complexed structures must be sampled exhaustively, evenly and with sufficient coverage to guarantee a near hit. Each configuration must be scored with a measure that is capable of ranking a nearly correct structure above at least 100,000 alternatives. This is a computationally intensive task, and a variety of strategies have been developed.

Reciprocal space methods

Each of the proteins may be represented as a simple cubic lattice. Then, for the class of scores which are discrete convolutions, configurations related to each other by translation of one protein by an exact lattice vector can all be scored almost simultaneously by applying the convolution theorem.[4] It is possible to construct reasonable, if approximate, convolution-like scoring functions representing both stereochemical and electrostatic fitness.
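The correlation trick can be sketched on toy grids. The sketch below assumes simple binary occupancy grids and a score of +1 per overlapping cell; real implementations use richer grid values, for example to penalize overlap with the protein core:

```python
import numpy as np

def fft_correlate(receptor_grid, ligand_grid):
    """Score every integer-lattice translation of the ligand grid
    against the receptor grid in one pass, via the convolution
    theorem: correlation in real space is a product in Fourier space."""
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    # The complex conjugate turns the convolution into a correlation.
    return np.real(np.fft.ifftn(R * np.conj(L)))

# Toy 8x8x8 grids: three occupied "surface" cells on the receptor and
# a matching three-cell ligand shape.
n = 8
receptor = np.zeros((n, n, n))
ligand = np.zeros((n, n, n))
receptor[2, 3, 3] = receptor[3, 3, 3] = receptor[4, 3, 3] = 1.0
ligand[0, 0, 0] = ligand[1, 0, 0] = ligand[2, 0, 0] = 1.0

scores = fft_correlate(receptor, ligand)
best = tuple(int(i) for i in np.unravel_index(np.argmax(scores), scores.shape))
# The best translation, (2, 3, 3), superimposes all three ligand cells
# on the receptor surface, for a score of 3.
assert best == (2, 3, 3)
assert np.isclose(scores.max(), 3.0)
```

One FFT pass evaluates all n³ translations at once; in a full run this is repeated over a set of sampled rotations of the ligand.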

Reciprocal space methods have been used extensively for their ability to evaluate enormous numbers of configurations. They lose their speed advantage if torsional changes are introduced. Another drawback is that it is impossible to make efficient use of prior knowledge. The question also remains whether convolutions are too limited a class of scoring function to identify the best complex reliably.

Monte Carlo methods

In Monte Carlo methods, an initial configuration is refined by taking random steps which are accepted or rejected based on their induced improvement in score (see the Metropolis criterion), until a certain number of steps have been tried. The assumption is that convergence to the best structure should occur from a large class of initial configurations, only one of which needs to be considered. Initial configurations may be sampled coarsely, and much computation time can be saved. Because of the difficulty of finding a scoring function which is both highly discriminating for the correct configuration and also converges to the correct configuration from a distance, the use of two levels of refinement, with different scoring functions, has been proposed.[7] Torsion can be introduced naturally into Monte Carlo as an additional property of each random move.
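The Metropolis acceptance rule itself is compact. The sketch below applies it to a toy one-dimensional "energy"; in a real docking run the move set would perturb rotations, translations and, for flexible docking, torsions (all names, step sizes and the temperature here are illustrative):

```python
import math
import random

def metropolis_minimize(score, x0, step=0.5, temperature=1.0,
                        n_steps=5000, seed=1):
    """Refine a configuration by random moves, accepting each move with
    the Metropolis criterion: always accept improvements, and accept
    worsenings with probability exp(-delta / T)."""
    rng = random.Random(seed)
    x, s = x0, score(x0)
    best_x, best_s = x, s
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # random move (here: 1-D shift)
        s_new = score(x_new)
        delta = s_new - s
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x, s = x_new, s_new                # move accepted
            if s < best_s:
                best_x, best_s = x, s          # track the best pose seen
    return best_x, best_s

# Toy "energy" with a global minimum at x = 3; a real run would score a
# full pose instead of a single coordinate.
energy = lambda x: (x - 3.0) ** 2
x_best, e_best = metropolis_minimize(energy, x0=-5.0)
assert abs(x_best - 3.0) < 0.5
```

The occasional acceptance of worsening moves is what lets the chain escape local minima, but it is also why exhaustive coverage cannot be guaranteed.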

Monte Carlo methods are not guaranteed to search exhaustively, so that the best configuration may be missed even using a scoring function which would in theory identify it. How severe a problem this is for docking has not been firmly established.


Scoring functions

To find a score which forms a consistent basis for selecting the best configuration, studies are carried out on a standard benchmark (see below) of protein–protein interaction cases. Scoring functions are assessed on the rank they assign to the best structure (ideally the best structure should be ranked 1) and on their coverage (the proportion of benchmark cases for which they achieve an acceptable result). Types of scores studied include measures of shape complementarity, electrostatics, desolvation energy and knowledge-based (statistical) potentials.

It is usual to create hybrid scores by combining one or more categories above in a weighted sum whose weights are optimized on cases from the benchmark. To avoid bias, the benchmark cases used to optimize the weights must not overlap with the cases used to make the final test of the score.
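The train/test separation described above can be sketched on synthetic data. Everything below is hypothetical (the component scores, the number of models per case, the grid search); the point is only that the weights are fitted on one set of cases and assessed on a disjoint one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark: each case has two per-model score components
# (say, shape and electrostatics) for 50 candidate models, plus the
# index of the near-native model. The invented "true" quality signal
# is 2*shape + 1*electrostatics plus a little noise.
def make_case():
    components = rng.normal(size=(50, 2))
    quality = components @ np.array([2.0, 1.0]) + rng.normal(scale=0.1, size=50)
    return components, int(np.argmax(quality))

cases = [make_case() for _ in range(40)]
train, test = cases[:20], cases[20:]   # disjoint: no case appears in both

def rank_of_native(weights, case):
    components, native = case
    scores = components @ weights
    # Rank 1 = best: count models scoring at least as well as the native.
    return int(np.sum(scores >= scores[native]))

def mean_rank(weights, case_set):
    return float(np.mean([rank_of_native(weights, c) for c in case_set]))

# Crude grid search for the hybrid weights, on the training cases only.
grid = np.linspace(0.1, 3.0, 30)
best_w = min(((a, b) for a in grid for b in grid),
             key=lambda w: mean_rank(np.array(w), train))

# Final assessment on held-out cases the weights never saw.
print("weights:", best_w, "held-out mean rank:", mean_rank(np.array(best_w), test))
```

Evaluating on the held-out half is what guards against the bias the text warns about: weights tuned and tested on the same cases would look better than they really are.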

The ultimate goal in protein–protein docking is to select the ideal ranking solution according to a scoring scheme that would also give an insight into the affinity of the complex. Such a development would drive in silico protein engineering, computer-aided drug design and/or high-throughput annotation of which proteins bind or not (annotation of the interactome). Several scoring functions have been proposed for binding affinity / free energy prediction.[7][8][9][10] However, the predictions of nine commonly used scoring functions have been found to be nearly uncorrelated with experimentally determined binding affinities (R² ≈ 0).[11][12] It was also observed that some components of the scoring algorithms may display better correlation to the experimental binding energies than the full score, suggesting that a significantly better performance might be obtained by combining the appropriate contributions from different scoring algorithms. Experimental methods for the determination of binding affinities include surface plasmon resonance (SPR), Förster resonance energy transfer, radioligand-based techniques, isothermal titration calorimetry (ITC), microscale thermophoresis (MST), and spectroscopic and other fluorescence-based measurements.
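The reported R² is the squared Pearson correlation between predicted and measured affinities. A sketch with synthetic numbers (the affinity values and both scores are invented):

```python
import numpy as np

def r_squared(predicted, experimental):
    """Squared Pearson correlation between a predicted score and
    experimentally determined binding affinities (e.g. log10 Kd)."""
    r = np.corrcoef(predicted, experimental)[0, 1]
    return r ** 2

rng = np.random.default_rng(42)
experimental = rng.uniform(-14.0, -5.0, size=81)   # invented log10 Kd values

# A score that tracks affinity correlates strongly...
good_score = experimental + rng.normal(scale=0.5, size=81)
# ...while an uninformative score yields the "nearly orthogonal" R^2 ~ 0.
random_score = rng.normal(size=81)

assert r_squared(good_score, experimental) > 0.8
assert r_squared(random_score, experimental) < 0.3
```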


A benchmark of 84 protein–protein interactions with known complexed structures has been developed for testing docking methods.[13] The set is chosen to cover a wide range of interaction types, and to avoid repeated features, such as the profile of interactors' structural families according to the SCOP database. Benchmark elements are classified into three levels of difficulty (the most difficult containing the largest change in backbone conformation). The protein–protein docking benchmark contains examples of enzyme-inhibitor, antigen-antibody and homomultimeric complexes.

A binding affinity benchmark has been based on the protein–protein docking benchmark.[11] 81 protein–protein complexes with known experimental affinities are included; these complexes span over 11 orders of magnitude in terms of affinity. Each entry of the benchmark includes several biochemical parameters associated with the experimental data, along with the method used to determine the affinity. This benchmark was used to assess the extent to which scoring functions could also predict affinities of macromolecular complexes.

This benchmark was subsequently updated and significantly expanded.[14] The new set is diverse in terms of the biological functions it represents, with complexes that involve G-proteins and receptor extracellular domains, as well as antigen/antibody, enzyme/inhibitor, and enzyme/substrate complexes. It is also diverse in terms of the partners' affinity for each other, with Kd ranging between 10⁻⁵ and 10⁻¹⁴ M. Nine pairs of entries represent closely related complexes that have a similar structure but a very different affinity, each pair comprising a cognate and a noncognate assembly. Because the unbound structures of the component proteins are available, conformational changes can be assessed. They are significant in most of the complexes, and large movements or disorder-to-order transitions are frequently observed. The set may be used to benchmark biophysical models aiming to relate affinity to structure in protein–protein interactions, taking into account the reactants and the conformational changes that accompany the association reaction, instead of just the final product.[14]
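The quoted Kd range maps onto binding free energies through the standard relation ΔG = RT ln Kd. A quick sketch:

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K (25 degrees C)

def delta_g(kd):
    """Binding free energy in kcal/mol from a dissociation constant Kd
    in molar units: dG = RT ln Kd. Tighter binding -> more negative dG."""
    return R * T * math.log(kd)

# The expanded affinity benchmark spans Kd from 1e-5 to 1e-14 M:
weak, tight = delta_g(1e-5), delta_g(1e-14)
print(f"Kd = 1e-5  M -> dG = {weak:.1f} kcal/mol")    # about -6.8
print(f"Kd = 1e-14 M -> dG = {tight:.1f} kcal/mol")   # about -19.1
```

The nine orders of magnitude in Kd thus correspond to roughly 12 kcal/mol of binding free energy, the span any structure-based affinity model must cover.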

The CAPRI assessment

The Critical Assessment of PRediction of Interactions[15] is an ongoing series of events in which researchers throughout the community try to dock the same proteins, as provided by the assessors. Rounds take place approximately every six months. Each round contains between one and six target protein–protein complexes whose structures have been recently determined experimentally. The coordinates are held privately by the assessors, with the cooperation of the structural biologists who determined them. The assessment of submissions is double blind.

CAPRI attracts a high level of participation (37 groups participated worldwide in round seven) and a high level of interest from the biological community in general. Although CAPRI results are of little statistical significance owing to the small number of targets in each round, the role of CAPRI in stimulating discourse is significant. (The CASP assessment is a similar exercise in the field of protein structure prediction).

