Synapses, the connections between neurons, are not fixed in strength but are plastic, adjusting according to various biophysical rules. While many types of plasticity have been identified, the interaction of different plasticity rules in a network of neurons is not well understood. We consider the behavior of two experimentally observed forms of synaptic plasticity, homeostatic plasticity and spike-timing-dependent plasticity (STDP), and seek to exploit these rules to tune a model neural network toward a state in which activity throughout the network remains dependent on its inputs. Our network consists of spiking neurons arranged in layers and connected in a sparse, feed-forward manner. Reliable propagation of layer-wide firing rates without all neurons firing together (synchrony) is required to achieve graded transmission of input rates, that is, signal transmission. Plasticity is introduced to our network through mathematical learning rules. The first, STDP, strengthens causal linkages by increasing or decreasing synaptic weights depending on the relative timing of pre- and postsynaptic activity. STDP by itself is often unstable, causing synaptic weights to grow without bound. We therefore attempt to stabilize STDP with a homeostatic rule, which adjusts the weights of all of a neuron's incoming synapses so that the neuron achieves some intrinsic target firing rate. While the current literature has focused primarily on the behavior of these rules in single neurons or in rate-based systems, we examine the interactions between these rules in a spiking, stochastic system with dynamic inputs, and their effectiveness in tuning such a network toward reliable rate propagation. Our hope is to shed light on the development of cortical networks capable of performing successful computation.
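To make the two learning rules concrete, the sketch below illustrates one common formulation: a pair-based exponential STDP window together with a multiplicative homeostatic scaling of a neuron's incoming weights toward a target firing rate. This is an illustrative assumption about the form of the rules; the specific amplitudes, time constants, and target rate shown (A_plus, A_minus, tau_plus, tau_minus, target_rate, eta_h) are placeholder values, not those used in our model.

```python
import numpy as np

# Illustrative sketch only: pair-based exponential STDP plus a simple
# homeostatic scaling step. All parameter values are assumptions.

A_plus, A_minus = 0.01, 0.012    # potentiation / depression amplitudes
tau_plus = tau_minus = 20.0      # STDP time constants (ms)

def stdp_update(w, dt):
    """Weight change for one spike pair separated by dt = t_post - t_pre (ms)."""
    if dt > 0:   # presynaptic spike precedes postsynaptic spike: potentiate (causal)
        return w + A_plus * np.exp(-dt / tau_plus)
    else:        # postsynaptic spike precedes presynaptic spike: depress (acausal)
        return w - A_minus * np.exp(dt / tau_minus)

def homeostatic_scaling(weights, observed_rate, target_rate, eta_h=0.001):
    """Scale all of a neuron's incoming weights toward an intrinsic target firing rate."""
    return weights * (1.0 + eta_h * (target_rate - observed_rate))

# Example: a neuron with five incoming synapses
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 0.5, size=5)
w[0] = stdp_update(w[0], dt=+8.0)    # causal pairing strengthens synapse 0
w[1] = stdp_update(w[1], dt=-15.0)   # acausal pairing weakens synapse 1
w = homeostatic_scaling(w, observed_rate=12.0, target_rate=10.0)  # firing too fast: scale down
print(w)
```

In this formulation, STDP acts on individual synapses according to spike timing, while the homeostatic step acts on all of a neuron's incoming synapses at once, counteracting the unbounded weight growth that STDP alone can produce.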