IEEE Transactions on Network Science and Engineering
https://www.computer.org/csdl/trans/tn/index.html
IEEE Computer Society Digital Library: list of 100 recently published journal articles.
https://www.computer.org/csdl
Belief Dynamics in Social Networks: A Fluid-Based Analysis
https://www.computer.org/csdl/trans/tn/2018/04/08059782-abs.html
The advent and proliferation of social media have led to the development of mathematical models describing the evolution of beliefs/opinions in an ecosystem composed of socially interacting users. The goal is to gain insight into the dominant collective social beliefs and into the impact of different components of the system, such as users’ interactions, while being able to predict users’ opinions. Following this thread, in this paper we consider a fairly general dynamical model of social interactions, which captures all the main features exhibited by a social system. For such a model, by embracing a mean-field approach, we derive a diffusion differential equation that represents the asymptotic belief dynamics as the number of users grows large. We then analyze the steady-state behavior as well as the time-dependent (transient) behavior of the system. In particular, for the steady-state distribution, we obtain simple closed-form expressions for a relevant class of systems, while we propose efficient semi-analytical techniques for the most general cases. Finally, we develop an efficient semi-analytical method to analyze the dynamics of the users’ belief over time, which can be applied to a remarkably large class of systems.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2760016

A Resource Allocation Mechanism for Cloud Radio Access Network Based on Cell Differentiation and Integration Concept
https://www.computer.org/csdl/trans/tn/2018/04/08070355-abs.html
A Self-Organising Cloud Radio Access Network (C-RAN) is proposed, which dynamically adapts to varying capacity demands. The Base Band Units and Remote Radio Heads are scaled semi-statically based on the concept of cell differentiation and integration (CDI), while dynamic load balancing is formulated as an integer-based optimisation problem with constraints. A Discrete Particle Swarm Optimisation (DPSO) is developed as an Evolutionary Algorithm to solve the load-balancing optimisation problem. The performance of the DPSO is tested on two problem scenarios and compared to an Exhaustive Search (ES) algorithm. The DPSO delivers optimum performance for small-scale networks and near-optimum performance for large-scale networks. The DPSO has less complexity and is much faster than the ES algorithm. Computational results demonstrate significant throughput improvement in a CDI-enabled C-RAN compared to a fixed C-RAN, i.e., an average throughput increase of 45.53 and 42.102 percent, and a decrease of 23.149 and 20.903 percent in the average blocked users, for Proportional Fair (PF) and Round Robin (RR) schedulers, respectively. A power model is proposed to estimate the overall power consumption of the C-RAN. A decrease of $\approx 16\%$ is estimated in a CDI-enabled C-RAN when compared to a fixed C-RAN, both serving the same geographical area.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2754101

Comparing the Effects of Failures in Power Grids Under the AC and DC Power Flow Models
https://www.computer.org/csdl/trans/tn/2018/04/08070362-abs.html
In this paper, we compare the effects of failures in power grids under the nonlinear AC and linearized DC power flow models. First, we numerically demonstrate that when there are no failures and the assumptions underlying the DC model are valid, the DC model approximates the AC model well in four considered test networks. Then, to evaluate the validity of the DC approximation upon failures, we numerically compare the effects of single line failures and the evolution of cascades under the AC and DC flow models using different metrics, such as yield (the ratio of the demand supplied at the end of the cascade to the initial demand). We demonstrate that the effects of a single line failure on the distribution of the flows on other lines are similar under the AC and DC models. However, the cascade simulations demonstrate that the assumptions underlying the DC model (e.g., ignoring power losses, reactive power flows, and voltage magnitude variations) can lead to inaccurate and overly optimistic cascade predictions. In particular, in large networks the DC model tends to overestimate the yield. Hence, using the DC model for cascade prediction may result in a misrepresentation of the gravity of a cascade.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2763746

Detecting Cascades from Weak Signatures
https://www.computer.org/csdl/trans/tn/2018/04/08071015-abs.html
Inspired by cyber-security applications, we consider the problem of detecting an infection process in a network when the indication that any particular node is infected is extremely noisy. Instead of waiting for a single node to provide sufficient evidence that it is indeed infected, we take advantage of the graph structure to detect cascades of weak indications of failures. We view the detection problem as a hypothesis testing problem, devise a new inference algorithm, and analyze its false positive and false negative errors in the high noise regime. Extensive simulations show that our algorithm is able to obtain low errors in the high noise regime by taking advantage of cascading topology analysis.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2764444

Modelling Spreading Process Induced by Agent Mobility in Complex Networks
https://www.computer.org/csdl/trans/tn/2018/04/08074785-abs.html
Most conventional epidemic models assume a contact-based contagion process. We depart from this assumption and study an epidemic spreading process in networks caused by agents acting as carriers of infection. These agents traverse from origins to destinations following specific paths in a network and, in the process, infect the sites they travel across. We focus our work on the <italic>Susceptible-Infected-Removed</italic> (SIR) epidemic model and use continuous-time Markov chain analysis to model the impact of such agent-mobility-induced contagion mechanics by taking into account the state transitions of each node individually, as opposed to most conventional epidemic approaches, which usually consider the mean aggregated behavior of all nodes. Our approach makes a single mean-field approximation to reduce complexity from exponential to polynomial. We study both network-wide properties, such as the epidemic threshold, and individual node vulnerability under such an agent-assisted infection spreading process. Furthermore, we provide a first-order approximation of the agents’ vulnerability, since infection is bi-directional. We compare our analysis of the spreading process induced by agent mobility against a contact-based epidemic model via a case study on the London Underground network, the second busiest metro system in Europe, with a real dataset recording commuters’ activities in the system. We highlight the key differences in the spreading patterns between the contact-based and agent-assisted spreading models. Specifically, we show that our model predicts a greater spreading radius than conventional contact-based models due to agents’ movements.
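The agent-assisted spreading mechanics can be illustrated with a minimal Monte-Carlo sketch. This is an illustrative toy, not the paper's continuous-time Markov-chain analysis: the function name `simulate_agent_sir`, the parameters `beta` and `gamma`, and the rule that a path carrying any infected site may infect the susceptible sites along it are all assumptions made for the example.

```python
import random

def simulate_agent_sir(paths, beta=0.5, gamma=0.2, seed_node=0, steps=50, rng=None):
    """Toy agent-mobility SIR sketch: agents repeatedly traverse fixed
    paths; a path touching any infected site may infect the susceptible
    sites it travels across. States: 'S', 'I', 'R'."""
    rng = rng or random.Random(42)
    nodes = {n for p in paths for n in p}
    state = {n: 'S' for n in nodes}
    state[seed_node] = 'I'
    for _ in range(steps):
        for path in paths:                       # each agent walks its path
            carrying = any(state[n] == 'I' for n in path)
            if carrying:
                for n in path:
                    if state[n] == 'S' and rng.random() < beta:
                        state[n] = 'I'           # carrier infects the site
        for n in nodes:                          # recovery/removal (I -> R)
            if state[n] == 'I' and rng.random() < gamma:
                state[n] = 'R'
    return state
```

Because infection travels along whole paths rather than single contacts, even a node far from the seed can be reached in one traversal, which is one way to see why agent mobility tends to enlarge the spreading radius.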
Another interesting finding is that, in contrast to the contact-based model, in which more centrally located nodes are proportionally more prone to infection, our model shows no such strict correlation: a node may not be highly susceptible even when located at the heart of the network, and vice versa.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2764523

Cascading Edge Failures: A Dynamic Network Process
https://www.computer.org/csdl/trans/tn/2018/04/08078210-abs.html
This paper studies a network process that can be used to model cascading failures in networks. The Dynamic Bond Percolation (DBP) process models, through stochastic local rules, the failure or recovery of an edge/link $(i,j)$ in a network. The probability that a working link fails or a failed link recovers may be independent of the state of other links <italic>or</italic> may depend locally on the state of neighboring links, as described by a cascade function $f$. In applications, this means that failures or recovery of links may have a regional preference, or, alternatively, relationships between neighbors in the network can lead to changes in the links between neighbors of neighbors. This paper shows that the dynamic evolution of $P(\mathbf{A},t)$, the probability that the network is in some state $\mathbf{A}$ describing the collective states of all the links at time $t$, converges to a stationary distribution. We use this distribution to study the emergence of global behaviors like consensus (i.e., catastrophic failure or full recovery of all the edges) or mixed (i.e., some failed and some working substructures).
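A stochastic local rule of this kind can be sketched as a simple simulation. This is a hedged toy, not the paper's model: the function `simulate_dbp`, the parameters `p_fail` and `p_recover`, and the particular way the cascade function modulates the transition probabilities are assumptions chosen for illustration.

```python
import random

def simulate_dbp(edges, adjacency, frac_failed_fn, p_fail=0.1, p_recover=0.3,
                 steps=1000, rng=None):
    """Toy Dynamic Bond Percolation chain: each edge fails or recovers with
    a probability modulated by a cascade function of its neighboring edges'
    states. state[e] == 1 means edge e is working, 0 means failed."""
    rng = rng or random.Random(0)
    state = {e: 1 for e in edges}
    for _ in range(steps):
        e = rng.choice(edges)                       # pick an edge at random
        frac = frac_failed_fn([state[n] for n in adjacency[e]])
        if state[e] == 1 and rng.random() < p_fail * (1 + frac):
            state[e] = 0                            # nearby failures raise failure rate
        elif state[e] == 0 and rng.random() < p_recover * (1 - frac):
            state[e] = 1                            # nearby failures hinder recovery
    return state
```

Running the chain long enough and sampling `state` gives an empirical picture of the stationary behavior: depending on `frac_failed_fn`, the chain settles near all-working, near all-failed, or in a mixed regime.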
In particular, we show that, depending on the local dynamical rule, different network substructures, such as hub or triangle subgraphs, are more prone to failure.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2764921

Minimizing Social Cost of Vaccinating Network SIS Epidemics
https://www.computer.org/csdl/trans/tn/2018/04/08085142-abs.html
Reducing the economic costs (losses) as much as possible is one of the main goals of controlling virus spreading and worm propagation on complex networks. Taking into account the interactions and conflicts of interests among egoistic individuals (nodes) in a network, we introduce the zero-determinant (ZD) strategy into our proposed non-cooperative networking vaccination game with an economic incentive mechanism to optimize the social cost against a susceptible-infected-susceptible (SIS) epidemic process. It is critical to select an appropriate node as the administrator to implement the social-cost-based ZD strategy. We study the relationship between the coercive capability of the administrator and its degree in some representative networks, and find that the amount of control that the administrator can implement is negatively correlated with its degree. After exploring the topological influence on the performance of the ZD strategy in vaccinating the SIS network epidemics, we find that the minimal social cost of the network increases with its average degree. We propose a coarse-graining ZD strategy for a multi-community network, with numerical simulations to verify the effectiveness of vaccinating the network epidemics.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2766665

Observational Equivalence in System Estimation: Contractions in Complex Networks
https://www.computer.org/csdl/trans/tn/2018/03/08019818-abs.html
This paper focuses on the observability of complex systems/networks, which is shown to be closely related to the concept of <italic>contraction</italic>. Indeed, for observable network tracking it is necessary/sufficient to have one node in each contraction measured. Therefore, nodes in a contraction are equivalent for recovering from loss of observability, implying that contraction size is a key factor for observability recovery. Here, using a polynomial-order contraction detection algorithm, we analyze the distribution of contractions, studying its relation with key network properties. Our results show that contraction size is related to the network clustering coefficient and degree heterogeneity. In particular, in networks with a power-law degree distribution, a high clustering coefficient implies fewer contractions, with smaller size on average. The implication is that estimation/tracking of such systems requires fewer measurements, while their observational recovery is more restrictive in case of sensor failure. Further, in Small-World networks, higher degree heterogeneity implies more contractions, with smaller size on average. Therefore, estimating the represented system requires more measurements, and recovery from measurement failure is also more limited. These results imply that one can tune the properties of synthetic networks to improve their estimation/observability recovery.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2746570

Finite-Time Passivity of Coupled Neural Networks with Multiple Weights
https://www.computer.org/csdl/trans/tn/2018/03/08023845-abs.html
This paper studies the finite-time passivity of multi-weighted coupled neural networks (MWCNNs) with and without coupling delays. First, based on existing passivity definitions, several new concepts of finite-time passivity are presented. By exploiting these definitions of finite-time passivity and designing appropriate controllers, we investigate the passivity of MWCNNs with and without coupling delays. In addition, some sufficient conditions to guarantee finite-time synchronization of MWCNNs with constant and delayed couplings are obtained under the condition that the MWCNNs are finite-time passive. Finally, two examples are given to verify the proposed finite-time passivity and synchronization criteria.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2746759

A Mathematical Theory for Multistage Battery Switching Networks
https://www.computer.org/csdl/trans/tn/2018/03/08027063-abs.html
In this paper, we propose a mathematical theory for multistage battery switching networks. The theory aims to address several design issues in managing a large-scale battery system, including flexibility, reliability, efficiency, complexity (scalability) and sustainability. Our multistage battery switching network is constructed by a concatenation of various rectangular “shapes” of battery packs. The shape of each battery pack is specified by its voltage and its capacity. We show that our multistage battery switching network can support a maximum number of $L_{\max}$ loads under the constraint that the total voltages of these loads do not exceed a design constant $V_{\max}$. Moreover, the voltage of each battery pack can be determined <italic>optimally</italic> by solving a Simultaneous Integer Representation (SIR) problem. To determine the capacity of each battery pack, we propose a max-min fairness battery allocation scheme, and show by computer simulations that such a scheme outperforms the uniform battery allocation scheme. We also propose a fault-tolerant battery switching network that can still be operated properly even after $F_{\max}$ battery packs fail. Such a fault-tolerant battery switching network enables a battery system to implement the Largest Remaining Capacity First (LRCF) policy, which does not require knowledge of the load profile.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2749327

Isomorphisms in Multilayer Networks
https://www.computer.org/csdl/trans/tn/2018/03/08039503-abs.html
We extend the concept of graph isomorphisms to multilayer networks with any number of “aspects” (i.e., types of layering). In developing this generalization, we identify multiple types of isomorphisms. For example, in multilayer networks with a single aspect, permuting vertex labels, layer labels, and both vertex labels and layer labels each yield different isomorphism relations between multilayer networks. Multilayer network isomorphisms lead naturally to defining isomorphisms in any of the numerous types of networks that can be represented as a multilayer network, and we thereby obtain isomorphisms for multiplex networks, temporal networks, networks with both of these features, and more. We reduce each of the multilayer network isomorphism problems to a graph isomorphism problem, where the size of the graph isomorphism problem grows linearly with the size of the multilayer network isomorphism problem. One can thus use software that has been developed to solve graph isomorphism problems as a practical means for solving multilayer network isomorphism problems. Our theory lays a foundation for extending many network analysis methods—including motifs, graphlets, structural roles, and network alignment—to any multilayer network.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2753963

Weighted Bearing-Compass Dynamics: Edge and Leader Selection
https://www.computer.org/csdl/trans/tn/2018/03/08047285-abs.html
This paper considers the design and effective interfaces of a distributed robotic formation running planar weighted bearing-compass dynamics. We present results that support methodologies for constructing formation topologies using submodular optimization techniques. Further, a convex optimization framework is developed for the selection of edge weights that increase performance. We explore a method to select leader agents that can translate and scale the formation, and a corresponding controller that helps the formation keep its overall shape intact during manipulation. The results are supported with examples that illustrate the approaches and their differing levels of performance.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2754944

Provision of Public Goods on Networks: On Existence, Uniqueness, and Centralities
https://www.computer.org/csdl/trans/tn/2018/03/08047288-abs.html
We consider the provision of public goods on networks of strategic agents. We study different effort outcomes of these network games, namely, the Nash equilibria, Pareto efficient effort profiles, and semi-cooperative equilibria (resulting from interactions among coalitions of agents). We identify necessary and sufficient conditions on the structure of the network for the uniqueness of the Nash equilibrium by using a connection between these outcomes and linear complementarity problems. We show that our finding unifies, and extends, existing results in the literature. We also identify conditions for the existence of Nash equilibria for two subclasses of games at the two extremes of our model, namely games of strategic complements and games of strategic substitutes. We provide a graph-theoretical interpretation of agents’ efforts at the Nash equilibrium, as well as the Pareto efficient outcomes and semi-cooperative equilibria, by linking an agent's decision to her centrality in the interaction network. Using this connection, we separate the effects of incoming and outgoing edges on agents’ efforts and uncover an alternating effect over walks of different length in the network.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2755003

Recovering Asymmetric Communities in the Stochastic Block Model
https://www.computer.org/csdl/trans/tn/2018/03/08053805-abs.html
We consider the sparse stochastic block model in the case where the degrees are uninformative. The case where the two communities have approximately the same size has been extensively studied, and we concentrate here on the community detection problem in the case of unbalanced communities. In this setting, spectral algorithms based on the non-backtracking matrix are known to solve the community detection problem (i.e., do strictly better than a random guess) when the signal is sufficiently large, namely above the so-called Kesten-Stigum threshold. In this regime, and when the average degree tends to infinity, we show that if the community of a vanishing fraction of the vertices is revealed, then a local algorithm (belief propagation) is optimal down to the Kesten-Stigum threshold, and we quantify explicitly its performance. Below the Kesten-Stigum threshold, we show that, in the large degree limit, there is a second threshold called the spinodal curve, below which the community detection problem is not solvable. The spinodal curve is equal to the Kesten-Stigum threshold when the fraction of vertices in the smallest community is above $p^*=\frac{1}{2}-\frac{1}{2\sqrt{3}}$, so that the Kesten-Stigum threshold is the threshold for solvability of community detection in this case. However, when the smallest community is smaller than $p^*$, the spinodal curve only provides a lower bound on the threshold for solvability. In the regime below the Kesten-Stigum bound and above the spinodal curve, we also characterize the performance of the best local algorithms as a function of the fraction of revealed vertices.
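The critical community size $p^*$ quoted above is a concrete number worth evaluating. The snippet below is only a numeric illustration of the abstract's statement; the helper name `spinodal_equals_ks` is an assumption made for the example.

```python
import math

def spinodal_equals_ks(p_small):
    """Per the abstract: when the smaller community holds a fraction
    p_small >= p* = 1/2 - 1/(2*sqrt(3)) of the vertices, the spinodal
    curve coincides with the Kesten-Stigum threshold, so KS is the exact
    solvability threshold; below p*, the spinodal curve is only a lower
    bound on the solvability threshold."""
    p_star = 0.5 - 1.0 / (2.0 * math.sqrt(3.0))   # approximately 0.2113
    return p_small >= p_star
```

So for communities holding, say, 30 percent of the vertices, the Kesten-Stigum threshold is tight, while for very small communities (well below roughly 21 percent) it no longer characterizes solvability.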
Our proof relies on a careful analysis of the associated reconstruction problem on trees, which might be of independent interest. In particular, we show that the spinodal curve corresponds to the reconstruction threshold on the tree.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2758201

A Robust Advantaged Node Placement Strategy for Sparse Network Graphs
https://www.computer.org/csdl/trans/tn/2018/02/07999269-abs.html
Establishing robust connectivity in heterogeneous networks (HetNets) is an important yet challenging problem. For a HetNet accommodating a large number of nodes, establishing perturbation-invulnerable connectivity is of utmost importance. This paper provides a robust advantaged node placement strategy best suited for sparse network graphs. In order to offer connectivity robustness, this paper models the communication range of an advantaged node with a hexagon embedded within a circle representing the physical range of a node. Consequently, the proposed node placement method of this paper is based on a so-called hexagonal coordinate system (HCS) in which we develop an extended algebra. We formulate a class of geometric distance optimization problems aiming at establishing robust connectivity of a graph of multiple clusters of nodes. After showing that our formulated problem is NP-hard, we utilize HCS to efficiently solve an approximation of the problem. First, we show that our solution closely approximates an exhaustive search solution approach for the originally formulated NP-hard problem. Then, we illustrate its advantages in comparison with other alternatives through experimental results capturing advantaged node cost, runtime, and robustness characteristics. The results show that our algorithm is most effective in sparse networks for which we derive classification thresholds.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2734111

Preventive and Reactive Cyber Defense Dynamics Is Globally Stable
https://www.computer.org/csdl/trans/tn/2018/02/08000326-abs.html
The recently proposed <italic>cybersecurity dynamics</italic> approach aims to understand cybersecurity from a holistic perspective by modeling the evolution of the global cybersecurity state. These models describe the interactions between the various kinds of cyber attacks and the various kinds of cyber defenses that take place in complex networks. In this paper, we study a particular kind of cybersecurity dynamics caused by the interactions between two classes of attacks (called push-based attacks and pull-based attacks) and two classes of defenses (called preventive and reactive defenses). The dynamics was previously shown to be globally stable in a <italic>special</italic> regime of the parameter universe of a model with node-independent and edge-independent parameters, but little is known beyond this regime. In this paper, we prove that the dynamics is globally stable in the <italic>entire</italic> parameter universe of a more general model with node-dependent and edge-dependent parameters. This means that the dynamics <italic>always</italic> converges to a unique equilibrium. We also prove that the dynamics converges <italic>exponentially</italic> to the equilibrium except for a particular parameter regime, in which the dynamics converges <italic>polynomially</italic>. Since it is often difficult to compute the equilibrium, we propose bounds of the equilibrium and numerically show that these bounds are tighter than those proposed in the literature.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2734904

Cascading Failures in Interdependent Systems: Impact of Degree Variability and Dependence
https://www.computer.org/csdl/trans/tn/2018/02/08008820-abs.html
We study cascading failures in a system comprising interdependent networks/systems, in which nodes rely on other nodes both in the same system and in other systems to perform their function. The (inter-)dependence among nodes is modeled using a dependence graph, where the degree vector of a node determines the number of other nodes it can potentially cause to fail in <italic>each</italic> system through the aforementioned dependency. In particular, we examine the impact of the variability and dependence properties of node degrees on the probability of cascading failures. We show that larger variability in node degrees hampers widespread failures in the system, starting with <italic>random</italic> failures. Similarly, positive correlations in node degrees make it harder to set off an epidemic of failures, thereby rendering the system more robust against random failures.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2738843

Analysis of Partial Diffusion LMS for Adaptive Estimation Over Networks with Noisy Links
https://www.computer.org/csdl/trans/tn/2018/02/08013733-abs.html
In the partial-diffusion least mean square (PDLMS) scheme, each node shares a part of its intermediate estimate vector with its neighbors at each iteration. In this paper, besides studying the general PDLMS scheme, we examine how noisy links degrade the network performance during the exchange of weight estimates. We investigate the steady-state mean square deviation (MSD) and derive a theoretical expression for it. We also derive the mean and mean-square convergence conditions for the PDLMS algorithm in the presence of noisy links. Our analysis reveals that, unlike the PDLMS with ideal links, the steady-state network MSD performance of the PDLMS algorithm does not improve as the number of entries communicated at each iteration increases. Strictly speaking, the noisy-link condition adds a term to the MSD derivation that has a noticeable effect on the overall performance. This term breaks the trade-off between communication cost and estimation performance that holds under ideal links. Our simulation results substantiate the effect of noisy links on the PDLMS algorithm and match well with the theoretical findings.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2742360

A Multi-Objective Evolutionary Algorithm for Promoting the Emergence of Cooperation and Controllable Robustness on Directed Networks
https://www.computer.org/csdl/trans/tn/2018/02/08013820-abs.html
The directedness of links is of significance in complex networks, and much attention has recently been paid to the dynamics of directed networks. In networked systems, the emergence of cooperation and robustness have been two central issues in recent decades. Previous studies have indicated that the structures promoting these two properties are opposite, which also reveals the great impact of structure on the functionalities of networks. Moreover, several realistic problems reflect the importance of simultaneously promoting robustness and the cooperation-maintaining ability of directed networks; however, few studies have focused on this problem. Therefore, in this paper, concentrating on optimizing the cooperation-maintaining ability together with controllable robustness on directed networks, we first model this issue as a multi-objective optimization problem, and then devise a multi-objective evolutionary algorithm, labeled ${\mathrm{MOEA}-\mathrm{Net}}_{{\mathrm{cc}}}$, to solve it.
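The abstract does not specify the internals of the algorithm, but a building block common to any multi-objective EA is extracting the non-dominated (Pareto) set from a population of candidates scored on the two objectives. The sketch below assumes hypothetical objective vectors of the form (cooperation level, controllable robustness), both to be maximized; the names `dominates` and `pareto_front` are illustrative.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (cooperation_level, controllable_robustness) per candidate
    network topology."""
    return [s for s in scores
            if not any(dominates(t, s) for t in scores if t != s)]
```

The diverse Pareto fronts mentioned in the experiments are exactly such non-dominated sets: each surviving candidate trades some cooperation-maintaining ability for robustness, or vice versa, leaving the final choice to the decision maker.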
In the experiments, the performance of ${\mathrm{MOEA}-\mathrm{Net}}_{{\mathrm{cc}}}$ is validated on both synthetic and real networks, and the results show that ${\mathrm{MOEA}-\mathrm{Net}}_{{\mathrm{cc}}}$ can not only achieve balanced optimal results without changing the degree distribution of the networks, but also create diverse Pareto fronts, which provide various potential candidates for decision makers dealing with social and economic dilemmas.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2742522

Networking the Boids Is More Robust Against Adversarial Learning
https://www.computer.org/csdl/trans/tn/2018/02/08017654-abs.html
Swarm behavior using Boids-like models has been studied primarily using close-proximity spatial sensory information (e.g., vision range). In this study, we propose a novel approach in which the classic definition of boids’ neighborhood, which relies on sensory perception and Euclidean spatial locality, is replaced with a graph-theoretic, network-based proximity mimicking communication and social networks. We demonstrate that networking the boids leads to faster swarming and higher quality of the formation. We further investigate the effect of adversarial learning, whereby an observer attempts to reverse-engineer the dynamics of the swarm by observing its behavior. The results show that networking the swarm is more robust against adversarial learning than a local-proximity neighborhood structure.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2745108

Algebraic Connectivity Under Site Percolation in Finite Weighted Graphs
https://www.computer.org/csdl/trans/tn/2018/02/08052519-abs.html
We study the behavior of <italic>algebraic connectivity</italic> in a weighted graph that is subject to <italic>site percolation</italic>, random deletion of the vertices. Using a refined concentration inequality for random matrices, we show in our main theorem that the (augmented) Laplacian of the percolated graph concentrates around its expectation. This concentration bound then provides a lower bound on the algebraic connectivity of the percolated graph. As a special case for $(n,d,\lambda)$-graphs (i.e., $d$-regular graphs on $n$ vertices with all non-trivial eigenvalues of the adjacency matrix less than $\lambda$ in magnitude), our result shows that, with high probability, the graph remains connected under homogeneous site percolation with survival probability $p\geq 1-C_{1}n^{-C_{2}/d}$, with $C_{1}$ and $C_{2}$ depending only on $\lambda/d$.
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2757762

Incompatibility Boundaries for Properties of Community Partitions
https://www.computer.org/csdl/trans/tn/2018/01/07859369-abs.html
We prove the incompatibility of certain desirable properties of community partition quality functions. Our results generalize the impossibility result of [Kleinberg 2003] by considering sets of weaker properties. In particular, we use an alternative notion to solve the central issue of the consistency property. (The latter means that modifying the graph in a way consistent with a partition should not have counterintuitive effects). Our results clearly show that community partition methods should not be expected to perfectly satisfy all ideally desired properties. We then proceed to show that this incompatibility no longer holds when slightly relaxed versions of the properties are considered, and we provide examples of simple quality functions satisfying these relaxed properties. An experimental study of these quality functions shows a behavior comparable to established methods in some situations, but more debatable results in others. This suggests that defining a notion of good partition in communities probably requires imposing additional properties.03/07/2018 2:04 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2671905A Micro-Foundation of Social Capital in Evolving Social Networks
https://www.computer.org/csdl/trans/tn/2018/01/07986996-abs.html
A social network confers benefits and advantages on individuals (and on groups); the literature refers to these benefits and advantages as <italic>social capital</italic>. An individual’s social capital depends on its position in the network and on the shape of the network—but positions in the network and the shape of the network are determined endogenously and change as the network forms and evolves. This paper presents a micro-founded mathematical model of the evolution of a social network and of the social capital of individuals within the network. The evolution of the network and of social capital are driven by exogenous and endogenous processes—entry, meeting, linking—that have both random and deterministic components. These processes are influenced by the extent to which individuals are <italic>homophilic</italic> (prefer others of their own type), <italic>structurally opportunistic</italic> (prefer neighbors of neighbors to strangers), <italic>socially gregarious</italic> (desire more or fewer connections) and by the <italic>distribution of types</italic> in the society. In the analysis, we identify different kinds of social capital: <italic>bonding capital</italic> refers to links to others; <italic>popularity capital</italic> refers to links from others; <italic>bridging capital</italic> refers to connections between others. We show that each form of capital plays a different role and is affected differently by the characteristics of the society. Bonding capital is created by forming a circle of connections; homophily increases bonding capital because it makes this circle of connections more homogeneous. Popularity capital leads to <italic>preferential attachment</italic> : individuals who become popular tend to become more and more popular because others are more likely to link to them. Homophily creates inequality in the popularity capital attained by different social categories; more gregarious types of agents are more likely to become popular. 
However, in homophilic societies, individuals who belong to less gregarious, less opportunistic, or major types are likely to be more <italic>central</italic> in the network and thus acquire a bridging capital. And, while extreme homophily maximizes an individual’s bonding capital, it also creates <italic>structural holes</italic> in the network, which hinder the exchange of ideas and information across social categories. Such structural holes represent a potential source of bridging capital: non-homophilic (tolerant or open-minded) individuals can fill these holes and broker interactions at the interface between different social categories.03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2729782Pattern Formation over Multigraphs
https://www.computer.org/csdl/trans/tn/2018/01/07987756-abs.html
Two of the most common pattern formation mechanisms are Turing-patterning in reaction-diffusion systems and lateral inhibition of neighboring cells. In this paper, we introduce a broad dynamical model of interconnected modules to study the emergence of patterns, with the above mentioned two mechanisms as special cases. Our results do not restrict the number of modules or their complexity, allow multiple layers of communication channels with possibly different interconnection structure, and do not assume symmetric connections between two connected modules. Leveraging only the static input/output properties of the subsystems and the spectral properties of the interconnection matrices, we characterize the stability of the homogeneous fixed points as well as sufficient conditions for the emergence of spatially non-homogeneous patterns. To obtain these results, we rely on properties of the graphs together with tools from monotone systems theory. As application examples, we consider patterning in neural networks, in reaction-diffusion systems, and in contagion processes over random graphs.03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2730261Information Flow in a Model of Policy Diffusion: An Analytical Study
https://www.computer.org/csdl/trans/tn/2018/01/07990161-abs.html
Networks are pervasive across science and engineering, but seldom do we precisely know their topology. The information-theoretic notion of transfer entropy has been recently proposed as a potent means to unveil connectivity patterns underlying collective dynamics of complex systems. By pairwise comparing time series of units in the network, transfer entropy promises to determine whether the units are connected or not. Despite considerable progress, our understanding of transfer entropy-based network reconstruction largely relies on computer simulations, which hamper the precise and systematic assessment of the accuracy of the approach. In this paper, we present an analytical study of the information flow in a network model of policy diffusion, thereby establishing closed-form expressions for the transfer entropy between any pair of nodes. The model consists of a finite-state ergodic Markov chain, for which we compute the joint probability distribution in the stationary limit. Our analytical results offer compelling evidence for the potential of transfer entropy to assist in the process of network reconstruction, clarifying the role and extent of tenable confounds associated with spurious connections between nodes.03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2731212Scheduling Resource-Bounded Monitoring Devices for Event Detection and Isolation in Networks
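The pairwise transfer-entropy estimate that underlies this kind of network reconstruction can be sketched as follows. This is a minimal plug-in estimator with history length 1, not the paper's closed-form Markov-chain analysis; the function name and the lagged-copy example are ours.

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) in bits, with history length 1."""
    n = len(y) - 1
    triples = Counter((y[t + 1], y[t], x[t]) for t in range(n))
    pairs = Counter((y[t + 1], y[t]) for t in range(n))
    ctx_yx = Counter((y[t], x[t]) for t in range(n))  # contexts (y_t, x_t)
    ctx_y = Counter(y[t] for t in range(n))           # contexts y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / ctx_yx[(y0, x0)]            # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs[(y1, y0)] / ctx_y[y0]     # p(y_{t+1} | y_t)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]  # y copies x with a one-step lag, so X drives Y
te = transfer_entropy(x, y)  # close to 1 bit for this deterministic coupling
```

Because y is a deterministic lagged copy of an i.i.d. fair-coin sequence, the conditional entropy given (y_t, x_t) vanishes and the estimate approaches 1 bit.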
https://www.computer.org/csdl/trans/tn/2018/01/07997748-abs.html
In networked systems, monitoring devices such as sensors are typically deployed to monitor various target locations. Targets are the points in the physical space at which events of some interest, such as random faults or attacks, can occur. Most often, monitoring devices have limited energy supplies, and they can operate for a limited duration. As a result, energy-efficient monitoring of target locations through a set of monitoring devices with limited energy supplies is a crucial problem in networked systems. In this paper, we study optimal scheduling of monitoring devices to maximize network coverage for detecting and isolating events on targets for a given network lifetime. The monitoring devices considered could remain active only for a fraction of the overall network lifetime. We formulate the problem of scheduling of monitoring devices as a graph labeling problem, which unlike other existing solutions, allows us to directly utilize the underlying network structure to explore the trade-off between coverage and network lifetime. In this direction, first we present a greedy heuristic, and then a game-theoretic solution to the graph labeling problem. The proposed setup can be used to simultaneously solve the scheduling and placement of monitoring devices, which, as our simulations illustrate, gives improved performance as compared to separately solving the placement and scheduling problems. Finally, we illustrate our results on various networks, including real-world water distribution networks and random geometric networks.03/05/2018 2:01 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2734048A Bi-Virus Competing Spreading Model with Generic Infection Rates
https://www.computer.org/csdl/trans/tn/2018/01/07997937-abs.html
Due to widespread applications, multi-virus competing spreading dynamics has recently aroused considerable interest. To our knowledge, all previous competing spreading models assume infection rates that are each linear in the virus occupancy probabilities of the individuals in a population. As linear infection rates overestimate real infection rates, in some situations these models cannot accurately predict the spreading process of multiple competing viruses. This work takes the first step toward enhancing the accuracy of multi-virus competing spreading models. A continuous-time bilayer-network-based bi-virus competing spreading model with generic infection rates is proposed. Criteria for the extinction of both viruses and for the survival of only one virus are presented, respectively. Numerical examples show that (1) if the generic bi-virus spreading model with linear infection rates predicts that the fraction of nodes infected with some virus would approach zero, the prediction of the fraction is accurate, and (2) if the scenario-relevant generic infection rates could be estimated accurately, the resulting model would be able to accurately forecast the evolutionary process of a pair of competing viruses.03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.27340752018 Reviewers List
https://www.computer.org/csdl/trans/tn/2018/01/08306533-abs.html
03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2018.27980782017 Index IEEE Transactions on Network Science and Engineering Vol. 4
https://www.computer.org/csdl/trans/tn/2018/01/08306548-abs.html
03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2018.2799542Editorial: Message from the Editor-in-Chief
https://www.computer.org/csdl/trans/tn/2018/01/08306587-abs.html
03/09/2018 10:43 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2018.2801198Moment-Based Spectral Analysis of Random Graphs with Given Expected Degrees
https://www.computer.org/csdl/trans/tn/2017/04/07940092-abs.html
We analyze the eigenvalues of a random graph ensemble, proposed by Chung and Lu, in which a given sequence of <italic>expected</italic> degrees, denoted by <inline-formula><tex-math notation="LaTeX">$\overline{w}_n = (w^{(n)}_1,\ldots, w^{(n)}_n)$</tex-math><alternatives><inline-graphic xlink:href="rahimian-ieq1-2712064.gif"/> </alternatives></inline-formula>, is prescribed on the <inline-formula><tex-math notation="LaTeX">$n$</tex-math> <alternatives><inline-graphic xlink:href="rahimian-ieq2-2712064.gif"/></alternatives></inline-formula> nodes of a random graph. We focus on the eigenvalues of the normalized (random) adjacency matrix of the graph ensemble, defined as <inline-formula><tex-math notation="LaTeX">$\mathbf {A}_n$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq3-2712064.gif"/></alternatives></inline-formula> <inline-formula> <tex-math notation="LaTeX">$=$</tex-math><alternatives><inline-graphic xlink:href="rahimian-ieq4-2712064.gif"/> </alternatives></inline-formula> <inline-formula><tex-math notation="LaTeX">$\sqrt{n\rho _n}[\mathbf {a}^{(n)}_{i,j}]_{i,j=1}^{n}$</tex-math><alternatives><inline-graphic xlink:href="rahimian-ieq5-2712064.gif"/> </alternatives></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$\rho _n = 1/\sum _{i=1}^{n} w^{(n)}_i$</tex-math><alternatives><inline-graphic xlink:href="rahimian-ieq6-2712064.gif"/></alternatives> </inline-formula> and <inline-formula><tex-math notation="LaTeX">$\mathbf {a}^{(n)}_{i,j} =1$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq7-2712064.gif"/></alternatives></inline-formula> if there is an edge between the nodes <inline-formula><tex-math notation="LaTeX">$\lbrace i,j\rbrace$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq8-2712064.gif"/></alternatives></inline-formula>, 0 otherwise. 
The <italic> empirical spectral distribution</italic> of <inline-formula><tex-math notation="LaTeX">$\mathbf {A}_n$</tex-math> <alternatives><inline-graphic xlink:href="rahimian-ieq9-2712064.gif"/></alternatives></inline-formula>, denoted by <inline-formula><tex-math notation="LaTeX">$\mathbf {F}_n(\mathord {\cdot})$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq10-2712064.gif"/></alternatives></inline-formula>, is the empirical measure putting a mass <inline-formula><tex-math notation="LaTeX">$1/n$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq11-2712064.gif"/></alternatives></inline-formula> at each of the <inline-formula><tex-math notation="LaTeX">$n$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq12-2712064.gif"/></alternatives></inline-formula> real eigenvalues of the symmetric matrix <inline-formula><tex-math notation="LaTeX">$\mathbf {A}_n$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq13-2712064.gif"/></alternatives></inline-formula>. Under some technical conditions on the expected degree sequence, we show that with probability one <inline-formula> <tex-math notation="LaTeX">$\mathbf {F}_n(\mathord {\cdot})$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq14-2712064.gif"/></alternatives></inline-formula> converges weakly to a <italic> deterministic</italic> distribution <inline-formula><tex-math notation="LaTeX">$F(\mathord {\cdot})$</tex-math> <alternatives><inline-graphic xlink:href="rahimian-ieq15-2712064.gif"/></alternatives></inline-formula> as <inline-formula><tex-math notation="LaTeX">$n\rightarrow \infty$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq16-2712064.gif"/></alternatives></inline-formula>. 
Furthermore, we fully characterize this deterministic distribution by providing explicit closed-form expressions for the moments of <inline-formula><tex-math notation="LaTeX">$F(\mathord {\cdot})$</tex-math><alternatives> <inline-graphic xlink:href="rahimian-ieq17-2712064.gif"/></alternatives></inline-formula>. We illustrate our results with two well-known degree distributions, namely, the power-law and the exponential degree distributions. Based on our results, we provide significant insights about the bulk behavior of the eigenvalue spectrum; in particular, we analyze the quasi-triangular spectral distribution of power-law networks.12/04/2017 2:13 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2712064Stochastic Subgradient Algorithms for Strongly Convex Optimization Over Distributed Networks
https://www.computer.org/csdl/trans/tn/2017/04/07944588-abs.html
We study diffusion and consensus based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is only available at a different node; and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic subgradient descent (SSD) updates. We use a carefully designed time-dependent weighted averaging of the SSD iterates, which yields a convergence rate of <inline-formula> <tex-math notation="LaTeX">$O\left(\frac{N\sqrt{N}}{(1-\sigma)T}\right)$</tex-math><alternatives> <inline-graphic xlink:href="sayin-ieq1-2713396.gif"/></alternatives></inline-formula> after <inline-formula> <tex-math notation="LaTeX">$T$</tex-math><alternatives><inline-graphic xlink:href="sayin-ieq2-2713396.gif"/> </alternatives></inline-formula> gradient updates for each node on a network of <inline-formula> <tex-math notation="LaTeX">$N$</tex-math><alternatives><inline-graphic xlink:href="sayin-ieq3-2713396.gif"/> </alternatives></inline-formula> nodes, where <inline-formula><tex-math notation="LaTeX">$0\leq \sigma <1$</tex-math> <alternatives><inline-graphic xlink:href="sayin-ieq4-2713396.gif"/></alternatives></inline-formula> denotes the second largest singular value of the communication matrix. This rate of convergence matches the performance lower bound up to constant terms. Similar to the SSD algorithm, the computational complexity of the proposed algorithm also scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as the communication load of the SSD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. 
We illustrate the merits of the algorithm with respect to state-of-the-art methods on benchmark real-life data sets.12/04/2017 2:13 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2713396Network Maximal Correlation
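The core idea of time-dependent weighted averaging of SSD iterates can be sketched on a single node. This is an illustrative toy, not the paper's distributed algorithm: the network averaging step and the exact weight sequence are not reproduced, and the strongly convex objective, step size 1/(mu*t), and linear weights t are our assumptions.

```python
import random

def weighted_ssd(grad, x0, mu, T, rng):
    """Stochastic subgradient descent with steps 1/(mu*t) and a
    time-weighted running average (weight t on the t-th iterate)."""
    x, num, den = x0, 0.0, 0.0
    for t in range(1, T + 1):
        g = grad(x, rng)          # noisy (sub)gradient oracle call
        x -= g / (mu * t)         # step size for mu-strong convexity
        num += t * x              # accumulate weighted average
        den += t
    return num / den

rng = random.Random(0)
# f(x) = (x - 3)^2 is 2-strongly convex; the oracle adds Gaussian noise
grad = lambda x, rng: 2.0 * (x - 3.0) + rng.gauss(0.0, 1.0)
xbar = weighted_ssd(grad, x0=0.0, mu=2.0, T=5000, rng=rng)
```

The weighted average suppresses the noise of late iterates, which is what yields the improved O(1/T)-type rates cited in the abstract.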
https://www.computer.org/csdl/trans/tn/2017/04/07953578-abs.html
We introduce Network Maximal Correlation (NMC) as a multivariate measure of nonlinear association among random variables. NMC is defined via an optimization that infers transformations of variables by maximizing aggregate inner products between transformed variables. For finite discrete and jointly Gaussian random variables, we characterize a solution of the NMC optimization using basis expansion of functions over appropriate basis functions. For finite discrete variables, we propose an algorithm based on alternating conditional expectation to determine NMC. Moreover we propose a distributed algorithm to compute an approximation of NMC for large and dense graphs using graph partitioning. For finite discrete variables, we show that the probability of discrepancy greater than any given level between NMC and NMC computed using empirical distributions decays exponentially fast as the sample size grows. For jointly Gaussian variables, we show that under some conditions the NMC optimization is an instance of the Max-Cut problem. We then illustrate an application of NMC in inference of graphical model for bijective functions of jointly Gaussian variables. Finally, we show NMC’s utility in a data application of learning nonlinear dependencies among genes in a cancer dataset.12/04/2017 2:12 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2716966How Complex Contagions Spread Quickly in Preferential Attachment Models and Other Time-Evolving Networks
https://www.computer.org/csdl/trans/tn/2017/04/07955096-abs.html
The <italic><inline-formula><tex-math notation="LaTeX">$k$</tex-math><alternatives> <inline-graphic xlink:href="ghasemiesfeh-ieq1-2718024.gif"/></alternatives></inline-formula>-complex contagion</italic> model is a social contagion model which describes the diffusion of behaviors in networks where the successful adoption of a behavior requires influence from multiple contacts. It has been argued that complex contagions better model behavioral changes such as adoption of new beliefs, fashion trends or expensive technology innovations. A contagion in this model starts from a set of initially infected seeds and progresses in rounds. In any round any node with at least <inline-formula><tex-math notation="LaTeX">$k>1$</tex-math><alternatives> <inline-graphic xlink:href="ghasemiesfeh-ieq2-2718024.gif"/></alternatives></inline-formula> infected neighbors becomes infected. Previous work on <inline-formula><tex-math notation="LaTeX">$k$</tex-math><alternatives> <inline-graphic xlink:href="ghasemiesfeh-ieq3-2718024.gif"/></alternatives></inline-formula>-complex contagions was focused on networks with uniform degree distributions. However, many real-world network topologies have non-uniform degree distribution and evolve over time. We analyze the spreading rate of a <inline-formula> <tex-math notation="LaTeX">$k$</tex-math><alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq4-2718024.gif"/> </alternatives></inline-formula>-complex contagion in a general family of time-evolving networks which includes the preferential attachment (PA) model. 
We prove that if the initial seeds are chosen as the <inline-formula> <tex-math notation="LaTeX">$k$</tex-math><alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq5-2718024.gif"/> </alternatives></inline-formula> earliest nodes in a network of this family, a <inline-formula> <tex-math notation="LaTeX">$k$</tex-math><alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq6-2718024.gif"/> </alternatives></inline-formula>-complex contagion covers the entire network of <inline-formula> <tex-math notation="LaTeX">$n$</tex-math><alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq7-2718024.gif"/> </alternatives></inline-formula> nodes in <inline-formula><tex-math notation="LaTeX">$O(\log n)$</tex-math> <alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq8-2718024.gif"/></alternatives></inline-formula> rounds with high probability (w.h.p). We prove that the choice of the seeds is crucial: in the PA model, even if a much larger number of seeds are chosen <italic>uniformly randomly</italic>, the contagion stops prematurely w.h.p. Although the earliest nodes in a PA model are likely to have high degrees, it is actually the evolutionary graph structure of such models that facilitates fast spreading of complex contagions. The general family of time-evolving graphs with this property even contains networks without a power law degree distribution. Finally, we prove that when a <inline-formula> <tex-math notation="LaTeX">$k$</tex-math><alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq9-2718024.gif"/> </alternatives></inline-formula>-complex contagion starts from an arbitrary set of initial seeds on a general graph, determining if the number of infected vertices is above a given threshold is <inline-formula> <tex-math notation="LaTeX">${\mathbf {P}}$</tex-math><alternatives> <inline-graphic xlink:href="ghasemiesfeh-ieq10-2718024.gif"/></alternatives></inline-formula>-complete. 
Thus, one cannot hope to categorize all the settings in which <inline-formula><tex-math notation="LaTeX">$k$</tex-math> <alternatives><inline-graphic xlink:href="ghasemiesfeh-ieq11-2718024.gif"/></alternatives></inline-formula>-complex contagions percolate in a graph.12/04/2017 2:13 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2718024Hyperbolic Embedding for Efficient Computation of Path Centralities and Adaptive Routing in Large-Scale Complex Commodity Networks
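The round-synchronous k-complex contagion dynamics described above can be sketched directly. The chain graph below, where each new node attaches to the two previous nodes, is our toy stand-in for the paper's time-evolving family; seeding the two earliest nodes lets a k=2 contagion cover everything.

```python
def k_complex_contagion(adj, seeds, k):
    """Round-synchronous spread: a node becomes infected once it has
    at least k infected neighbors; returns (infected set, #rounds)."""
    infected, rounds = set(seeds), 0
    while True:
        new = {v for v in adj if v not in infected
               and sum(u in infected for u in adj[v]) >= k}
        if not new:
            return infected, rounds
        infected |= new
        rounds += 1

# toy evolving chain: node i attaches to nodes i-1 and i-2,
# so the k=2 earliest nodes seed a full cascade
n = 20
adj = {i: set() for i in range(n)}
for i in range(2, n):
    for j in (i - 1, i - 2):
        adj[i].add(j)
        adj[j].add(i)
infected, rounds = k_complex_contagion(adj, {0, 1}, k=2)
```

On this chain the contagion advances exactly one node per round, so 18 rounds infect all 20 nodes; the evolving attachment structure, not high degree, is what carries the cascade.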
https://www.computer.org/csdl/trans/tn/2017/03/07891008-abs.html
Computing the most central nodes in large-scale commodity networks is rather important for improving routing and associated applications. In this paper, we introduce a novel framework for the analysis and efficient computation of routing path-based centrality measures, focusing on betweenness and traffic load centrality. The proposed framework enables efficient approximation and in special cases accurate computation of the aforementioned measures in large-scale complex networks, as well as improving/adapting commodity (traffic) routing by identifying and alleviating key congestion points. It capitalizes on network embedding in hyperbolic space and exploits properties of greedy routing over hyperbolic coordinates. We show the computational benefits and approximation precision of our approach by comparing it with state-of-the-art path centrality computation techniques. We demonstrate its applicability on real topologies, characteristic of actual large-scale commodity networks, e.g., data, utility networks. Focusing on two graph embedding types, Rigel and greedy, we compare their impact on the performance of our framework. Then, we exemplify and statistically analyze the dynamic routing adaptation, via the variation of the minimum-depth spanning tree employed for greedy embedding in hyperbolic space. Notably, this allows for efficient routing adaptation according to a simple, distributed computation that can be applied during network operation to alleviate arising bottlenecks.08/31/2017 4:06 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2690258Inference of Hidden Social Power Through Opinion Formation in Complex Networks
https://www.computer.org/csdl/trans/tn/2017/03/07892936-abs.html
Social network analysis and mining is of ever-increasing importance in various disciplines. In this context, finding the most influential nodes with the highest social power is important in many applications, including spreading of innovation, opinion formation, immunization, information propagation and recommendation. In this manuscript, we propose a mathematical framework to effectively estimate the social power (influence) of nodes from time series of their interactions. We assume that there is a connection network on which the nodes interact and exchange their opinions. The time series of the opinion values (with hidden social power values) are taken as input to the proposed formalism, and an optimization approach yields estimates of the social power values. We propose an estimation framework based on the Maximum-a-Posteriori method that can be converted to a convex optimization problem using Jensen's inequality. We apply the proposed method on a number of model networks and show that it correctly estimates the true values of the social power. The proposed method is not sensitive to the specific form of social power used to produce the time series of the opinion values. We also consider an application of finding influential nodes in opinion formation through informed agents. In this application, the problem is to find a number of influential nodes to which the informed agents should be connected to maximize their influence. Our numerical simulations show that the proposed method outperforms classical heuristic methods, including connecting the informed agents to nodes with the highest degree, betweenness, closeness, or PageRank centralities, or based on a state-of-the-art opinion-based model.08/31/2017 4:06 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2691405On Detection and Structural Reconstruction of Small-World Random Networks
https://www.computer.org/csdl/trans/tn/2017/03/07924316-abs.html
In this paper, we study detection and fast reconstruction of the celebrated Watts-Strogatz (WS) small-world random graph model <xref ref-type="bibr" rid="ref29">[29]</xref>, which aims to describe real-world complex networks that exhibit both high clustering and short average path length properties. The WS model with neighborhood size <inline-formula> <tex-math notation="LaTeX">$k$</tex-math></inline-formula> and rewiring probability <inline-formula> <tex-math notation="LaTeX">$\beta$</tex-math><alternatives><inline-graphic xlink:href="liang-ieq2-2703102.gif"/> </alternatives></inline-formula> can be viewed as a continuous interpolation between a deterministic ring lattice graph and the Erdős-Rényi random graph. We study the computational and statistical aspects of detection and recovery of the deterministic ring lattice structure (strong ties) in the presence of random connections (weak ties). The phase diagram in terms of <inline-formula><tex-math notation="LaTeX">$(k,\beta)$</tex-math><alternatives> <inline-graphic xlink:href="liang-ieq3-2703102.gif"/></alternatives></inline-formula> is shown to consist of several regions according to the difficulty of the problem. We propose distinct methods for these regions.08/31/2017 4:06 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2703102The Power of Quasi-Shortest Paths: $\rho$ -Geodesic Betweenness Centrality
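The WS interpolation itself is easy to reproduce. The generator below is a minimal sketch (our own helper names, not the paper's reconstruction method): at beta=0 it yields a pure ring lattice, whose degree-4 local clustering coefficient is exactly 0.5.

```python
import random

def watts_strogatz(n, k, beta, rng):
    """Ring lattice with k neighbors per side; each forward edge is
    rewired to a uniform random target with probability beta."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            u, v = i, (i + j) % n
            if rng.random() < beta:
                v = rng.randrange(n)
                while v == u or (u, v) in edges or (v, u) in edges:
                    v = rng.randrange(n)
            edges.add((u, v))
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def avg_clustering(adj):
    """Average local clustering coefficient over all nodes."""
    total = 0.0
    for v, nb in adj.items():
        d = len(nb)
        if d < 2:
            continue
        links = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        total += links / (d * (d - 1) / 2)
    return total / len(adj)

rng = random.Random(0)
ring = watts_strogatz(50, 2, 0.0, rng)  # beta=0: deterministic ring lattice
c0 = avg_clustering(ring)               # 0.5 for a degree-4 ring lattice
```

Increasing beta interpolates toward an Erdős-Rényi-like graph, driving clustering down while shortening average paths.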
https://www.computer.org/csdl/trans/tn/2017/03/07934353-abs.html
Betweenness centrality metrics usually underestimate the importance of nodes that are close to shortest paths but do not exactly fall on them. In this paper, we reevaluate the importance of such nodes and propose the <italic> <inline-formula><tex-math notation="LaTeX">$\rho$</tex-math><alternatives> <inline-graphic xlink:href="varelademedeiros-ieq2-2708705.gif"/></alternatives></inline-formula>-geodesic betweenness centrality</italic>, a novel metric that assigns weights to paths (and, consequently, to nodes on these paths) according to how close they are to shortest paths. The paths slightly longer than the shortest one are defined as <italic>quasi</italic>-shortest paths, and they are able to increase or to decrease the importance of a node according to how often the node falls on them. We compare the proposed metric with the traditional, distance-scaled, and random walk betweenness centralities using four network datasets with distinct characteristics. The results show that the proposed metric, besides better assessing the topological role of a node, is also able to maintain the rank position of nodes over time compared to the other metrics; this means that network dynamics affect our metric less than the others. Such a property could help to avoid, for instance, the waste of resources caused when data follows only the shortest paths in dynamic networks, also reducing associated costs.08/31/2017 4:06 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2708705SIS Epidemic Spreading with Heterogeneous Infection Rates
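A simplified, unweighted variant of the idea can be sketched: admit every path whose length is at most rho times the shortest distance and credit its interior nodes. This is our illustration only; the paper's actual path-weighting scheme is not reproduced, and the brute-force path enumeration is viable only for tiny graphs.

```python
from itertools import permutations
from collections import defaultdict

def all_simple_paths(adj, s, t, limit):
    """DFS enumeration of simple s-t paths with at most `limit` edges
    (exponential cost; intended for toy graphs only)."""
    out, stack = [], [(s, [s])]
    while stack:
        v, path = stack.pop()
        if v == t:
            out.append(path)
            continue
        if len(path) - 1 == limit:
            continue
        for u in adj[v]:
            if u not in path:
                stack.append((u, path + [u]))
    return out

def quasi_geodesic_betweenness(adj, rho):
    """Credit interior nodes of every path of length <= rho * d(s, t)."""
    score = defaultdict(float)
    for s, t in permutations(adj, 2):
        lengths = [len(p) - 1 for p in all_simple_paths(adj, s, t, len(adj))]
        if not lengths:
            continue
        admissible = all_simple_paths(adj, s, t, int(rho * min(lengths)))
        for p in admissible:
            for v in p[1:-1]:
                score[v] += 1.0 / len(admissible)
    return dict(score)

# tiny example: c and d lie on a slightly-longer s-t route
adj = {'s': {'a', 'b', 'c'}, 'a': {'s', 't'}, 'b': {'s', 't'},
       'c': {'s', 'd'}, 'd': {'c', 't'}, 't': {'a', 'b', 'd'}}
strict = quasi_geodesic_betweenness(adj, 1.0)
relaxed = quasi_geodesic_betweenness(adj, 1.5)
```

With rho = 1.5, nodes c and d pick up credit from the quasi-shortest s-t route that strict shortest-path betweenness ignores.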
https://www.computer.org/csdl/trans/tn/2017/03/07937925-abs.html
In this work, we aim to understand the influence of the heterogeneity of infection rates on Susceptible-Infected-Susceptible (SIS) epidemic spreading. Employing the classic SIS model as the benchmark, we study the influence of independently identically distributed infection rates on the average fraction of infected nodes in the metastable state. The log-normal, gamma and a newly designed distribution are considered for the infection rates. We find that, when the recovery rate is small, i.e., the epidemic spreads out in both the homogeneous and heterogeneous cases: 1) the heterogeneity of infection rates on average retards the virus spreading, and 2) a larger even-order moment of the infection rates leads to a smaller average fraction of infected nodes, but the odd-order moments contribute in the opposite way; when the recovery rate is large, i.e., the epidemic may die out or infect only a small fraction of the population, the heterogeneity of infection rates may enhance the probability that the epidemic spreads out. Finally, we verify our conclusions via real-world networks with their heterogeneous infection rates. Our results suggest that, in reality, the epidemic spread may not be as severe as the classic SIS model indicates, but eliminating the epidemic is probably more difficult.08/31/2017 4:06 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2709786Information Cascades in Feed-Based Networks of Users with Limited Attention
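The threshold behavior behind "spreads out" versus "dies out" can be illustrated with a standard NIMFA-style mean-field iteration that accepts per-node (heterogeneous) infection rates. This is a common approximation, not the paper's exact analysis, and the complete-graph example and rate values are our assumptions.

```python
def sis_meanfield(adj, beta, delta, steps=4000, dt=0.01):
    """Discretized mean-field SIS: p[i] is node i's infection probability,
    beta[i] its (possibly heterogeneous) infection rate, delta the
    recovery rate; returns the average infected fraction at the end."""
    p = [0.5] * len(adj)
    for _ in range(steps):
        nxt = []
        for i, nbrs in adj.items():
            pressure = beta[i] * sum(p[j] for j in nbrs)
            nxt.append(p[i] + dt * ((1.0 - p[i]) * pressure - delta * p[i]))
        p = [min(1.0, max(0.0, v)) for v in nxt]
    return sum(p) / len(p)

n = 10
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
beta = [1.0] * n
endemic = sis_meanfield(complete, beta, delta=1.0)   # small recovery rate
extinct = sis_meanfield(complete, beta, delta=20.0)  # large recovery rate
```

On the complete graph the mean-field threshold sits at delta/beta = n - 1 = 9, so delta = 1 settles into a metastable endemic state (about 8/9 infected) while delta = 20 drives the epidemic extinct.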
https://www.computer.org/csdl/trans/tn/2017/02/07738584-abs.html
We build a model of information cascades on feed-based networks, taking into account the finite attention span of users, message generation rates and message forwarding rates. Through simulation of this model, we study the effect of the extent of user attention on the probability that the cascade becomes viral. In analogy with a branching process, we estimate the branching factor associated with the cascade process for different attention spans and different forwarding probabilities, and demonstrate that beyond a certain attention span, cascades tend to become viral. The critical forwarding probabilities have an inverse relationship with the attention span. Next, we develop an analytical and numerical approach that allows us to determine the branching factor for given values of message generation rates, message forwarding rates and attention spans. The branching factors obtained using this analytical approach show good agreement with those obtained through simulations. Finally, we analyze an event-specific dataset obtained from Twitter, and show that estimated branching factors correlate well with the cascade-size distributions associated with distinct hashtags.06/02/2017 2:03 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2625807Competitive Propagation: Models, Asymptotic Behavior and Quality-Seeding Games
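The branching-process analogy can be made concrete with a Galton-Watson simulation: a cascade is viral when the branching factor exceeds 1. The Poisson offspring law, trial counts, and the population cap used to declare early virality are our assumptions, not the paper's feed model.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for Poisson-distributed offspring counts."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def survival_fraction(branching_factor, trials, rng, max_gen=50, cap=500):
    """Fraction of cascades still alive after max_gen generations
    (population capped at `cap` to declare early virality)."""
    alive = 0
    for _ in range(trials):
        pop = 1
        for _ in range(max_gen):
            pop = sum(poisson(branching_factor, rng) for _ in range(pop))
            if pop == 0 or pop >= cap:
                break
        alive += pop > 0
    return alive / trials

rng = random.Random(0)
sub = survival_fraction(0.5, 400, rng)  # subcritical: cascades die out
sup = survival_fraction(2.0, 400, rng)  # supercritical: often go viral
```

For mean offspring 2 the theoretical survival probability is roughly 0.8, while below 1 extinction is certain, mirroring the attention-span threshold the abstract describes.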
https://www.computer.org/csdl/trans/tn/2017/02/07812795-abs.html
In this paper we propose a class of propagation models for multiple competing products over a social network. We consider two propagation mechanisms: social conversion and self conversion, corresponding, respectively, to endogenous and exogenous factors. A novel concept, the product-conversion graph, is proposed to characterize the interplay among competing products. According to the chronological order of social and self conversions, we develop two Markov-chain models and, based on the independence approximation, we approximate them with two corresponding systems of difference equations. Our theoretical analysis of these two approximated models reveals the dependency of their asymptotic behavior on the structures of both the product-conversion graph and the social network, as well as on the initial condition. In addition to the theoretical work, we investigate via numerical analysis the accuracy of the independence approximation and the asymptotic behavior of the Markov-chain model for the case where social conversion occurs before self conversion. Finally, we propose two classes of games based on the competitive propagation model: the one-shot game and the dynamic infinite-horizon game. We characterize the quality-seeding trade-off for the first game and the Nash equilibrium in both games.
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2651070
A Test of Hypotheses for Random Graph Distributions Built from EEG Data
https://www.computer.org/csdl/trans/tn/2017/02/07862892-abs.html
The theory of random graphs has been applied in recent years to model neural interactions in the brain. While the probabilistic properties of random graphs have been extensively studied, the development of statistical inference methods for this class of objects has received less attention. In this work we propose a non-parametric test of hypotheses to test whether a sample of random graphs was generated by a given probability distribution (one-sample test) or whether two samples of random graphs originated from the same probability distribution (two-sample test). We prove a Central Limit Theorem providing the asymptotic distribution of the test statistics, and we propose a method to compute the quantiles of the finite sample distributions by simulation. The test makes no assumption on the specific form of the distributions, and it is consistent against any alternative hypothesis that differs from the sample distribution on at least one edge-marginal. Moreover, we show that the test is a Kolmogorov-Smirnov type test, for a given distance between graphs, and we study its performance on simulated data. We apply it to compare graphs of brain functional network interactions built from electroencephalographic (EEG) data collected during the visualization of point-light displays depicting human locomotion.
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2674026
Diffusion of Innovation in Large Scale Graphs
https://www.computer.org/csdl/trans/tn/2017/02/07870696-abs.html
Will a new smartphone application diffuse deeply in the population, or will it soon sink into oblivion? To predict this, we argue that common models of the spread of innovations based on cascade dynamics or epidemics may not be fully adequate. In this paper, we model the spread of a new technological item in a population through a novel network dynamics where diffusion is based on word of mouth and the persuasion strength increases the more the product is diffused. We carry out an analysis on large-scale graphs to show how the parameters of the model, the topology of the graph, and, possibly, the initial diffusion of the item determine whether the spread of the item is successful or not. The paper is completed by a number of numerical simulations corroborating the analytical results.
06/05/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2678202
Fast Generation of Spatially Embedded Random Networks
https://www.computer.org/csdl/trans/tn/2017/02/07879823-abs.html
Spatially Embedded Random Networks such as the Waxman random graph have been used in many settings for synthesizing networks. Prior to our work, there existed no software for generating these efficiently. Existing techniques are <inline-formula><tex-math notation="LaTeX">$O(n^2)$</tex-math><alternatives> <inline-graphic xlink:href="roughan-ieq1-2681700.gif"/></alternatives></inline-formula>, where <inline-formula> <tex-math notation="LaTeX">$n$</tex-math><alternatives> <inline-graphic xlink:href="roughan-ieq2-2681700.gif"/></alternatives></inline-formula> is the number of nodes in the network; in this paper we present an <inline-formula><tex-math notation="LaTeX">$O(n + e)$</tex-math> <alternatives><inline-graphic xlink:href="roughan-ieq3-2681700.gif"/></alternatives></inline-formula> algorithm, where <inline-formula><tex-math notation="LaTeX">$e$</tex-math><alternatives> <inline-graphic xlink:href="roughan-ieq4-2681700.gif"/></alternatives></inline-formula> is the number of edges.
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2681700
Load-Dependent Cascading Failures in Finite-Size Erdös-Rényi Random Networks
https://www.computer.org/csdl/trans/tn/2017/02/07883955-abs.html
Large-scale cascading failures can be triggered by very few initial failures, leading to severe damages in complex networks. This paper studies <italic>load-dependent</italic> cascading failures in random networks consisting of a large but finite number of components. Under a random single-node attack, a framework is developed to quantify the damage at each stage of a cascade. Estimations and analyses for the fraction of failed nodes are presented to evaluate the time-dependent system damage due to the attack. The results provide guidelines for choosing the load margin to avoid a cascade of failures. Furthermore, the analysis reveals a phase transition behavior in the extent of the damage as the load margin grows, i.e., the fraction of the damaged components drops from near one to near zero over a slight change in the load margin. The critical value of the load margin and the short interval over which such an abrupt change occurs are derived to characterize the network reaction to small network load variations. Our findings provide design principles for enhancing the network resiliency in load-dependent complex networks with practical sizes.
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2685582
State of the Journal Editorial
https://www.computer.org/csdl/trans/tn/2017/02/07938527-abs.html
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2701540
Message from the Incoming Editor-in-Chief
https://www.computer.org/csdl/trans/tn/2017/02/07938530-abs.html
06/02/2017 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2701561
Distributed and Robust Fair Optimization Applied to Virus Diffusion Control
https://www.computer.org/csdl/trans/tn/2017/01/07580655-abs.html
This paper proposes three novel nonlinear, continuous-time, distributed algorithms to solve a class of fair resource allocation problems, which allow an interconnected group of operators to collectively minimize a global cost function subject to equality and inequality constraints. The proposed algorithms are designed to be robust so that temporary errors in communication or computation do not change their convergence to the equilibrium, and therefore, operators do not require global knowledge of the total resources in the network nor any specific initialization procedure. To analyze convergence of the algorithms, we use nonlinear analysis tools that exploit partial stability theory and nonsmooth Lyapunov analysis. We illustrate the applicability of our approach in connection to problems of virus spread minimization over computer and public networks. In simulation examples associated with virus spread minimization, we show that the virus elimination algorithms are asymptotically convergent and robust in the proposed sense.
03/03/2017 5:59 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2614751
On the Emergence of Shortest Paths by Reinforced Random Walks
https://www.computer.org/csdl/trans/tn/2017/01/07593357-abs.html
The co-evolution between network structure and functional performance is a fundamental and challenging problem whose complexity emerges from the intrinsic interdependent nature of structure and function. Within this context, we investigate the interplay between the efficiency of network navigation (i.e., path lengths) and network structure (i.e., edge weights). We propose a simple and tractable model based on iterative biased random walks where edge weights increase over time as a function of the traversed path length. Under mild assumptions, we prove that biased random walks will eventually traverse only shortest paths in their journey towards the destination. We further characterize the transient regime, proving that the probability to traverse non-shortest paths decays according to a power law. We also highlight various properties of this dynamic, such as the trade-off between exploration and convergence, and the preservation of initial network plasticity. We believe the proposed model and results can be of interest to various domains where biased random walks and decentralized navigation have been applied.
03/03/2017 6:00 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2618063
Analysis of Centrality in Sublinear Preferential Attachment Trees via the Crump-Mode-Jagers Branching Process
https://www.computer.org/csdl/trans/tn/2017/01/07725526-abs.html
We investigate centrality and root-inference properties in a class of growing random graphs known as sublinear preferential attachment trees. We show that a continuous-time branching process called the Crump-Mode-Jagers (CMJ) branching process is well suited to analyzing such random trees, and prove that, almost surely, a unique terminal tree centroid emerges, having the property that it becomes more central than any other fixed vertex in the limit of the random growth process. Our result generalizes and extends previous work establishing persistent centrality in uniform and linear preferential attachment trees. We also show that centrality may be utilized to generate a finite-sized <inline-formula><tex-math notation="LaTeX">$1-\epsilon$</tex-math><alternatives> <inline-graphic xlink:href="jog-ieq1-2622923.gif"/></alternatives></inline-formula> confidence set for the root node, for any <inline-formula><tex-math notation="LaTeX">$\epsilon > 0$</tex-math><alternatives> <inline-graphic xlink:href="jog-ieq2-2622923.gif"/></alternatives></inline-formula>, in a certain subclass of sublinear preferential attachment trees.
03/03/2017 6:01 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2622923
Confidence Sets for the Source of a Diffusion in Regular Trees
https://www.computer.org/csdl/trans/tn/2017/01/07740106-abs.html
We study the problem of identifying the source of a diffusion spreading over a regular tree. When the degree of each node is at least three, we show that it is possible to construct confidence sets for the diffusion source with size independent of the number of infected nodes. Our estimators are motivated by analogous results in the literature concerning identification of the root node in preferential attachment and uniform attachment trees. At the core of our proofs is a probabilistic analysis of Pólya urns corresponding to the number of uninfected neighbors in specific subtrees of the infection tree. We also provide an example illustrating the shortcomings of source estimation techniques in settings where the underlying graph is asymmetric.
03/03/2017 5:59 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2627502
Community Detection and Classification in Hierarchical Stochastic Blockmodels
https://www.computer.org/csdl/trans/tn/2017/01/07769223-abs.html
In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives, whose structure is more amenable to analysis. We propose a robust, scalable, integrated methodology for <italic>community detection</italic> and <italic>community comparison</italic> in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation, and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a <italic>hierarchical stochastic blockmodel</italic>—namely, a stochastic blockmodel with a natural hierarchical structure—and establish conditions under which our algorithm yields consistent estimates of model parameters and <italic>motifs</italic>, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm in both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed <italic>Drosophila</italic> connectome and in the social network Friendster.
03/03/2017 6:01 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2634322
2016 Reviewers List*
https://www.computer.org/csdl/trans/tn/2017/01/07870780-abs.html
03/03/2017 5:59 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2017.2653898
2016 Index IEEE Transactions on Network Science and Engineering Vol. 3
https://www.computer.org/csdl/trans/tn/2017/01/07870802-abs.html
03/03/2017 5:59 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2638779
The Smallest Eigenvalue of the Generalized Laplacian Matrix, with Application to Network-Decentralized Estimation for Homogeneous Systems
https://www.computer.org/csdl/trans/tn/2016/04/07542498-abs.html
The problem of synthesizing network-decentralized observers arises when several agents, corresponding to the nodes of a network, exchange information about local measurements to asymptotically estimate their own state. The network topology is unknown to the nodes, which can rely on information about their neighboring nodes only. For homogeneous systems, composed of identical agents, we show that a network-decentralized observer can be designed by starting from local observers (typically, optimal filters) and then adapting the gain to ensure overall stability. The smallest eigenvalue of the so-called generalized Laplacian matrix is crucial: stability is guaranteed if the gain is greater than the inverse of this eigenvalue, which is strictly positive if the graph is externally connected. To deal with uncertain topologies, we characterize the worst-case smallest eigenvalue of the generalized Laplacian matrix for externally connected graphs, and we prove that the worst-case graph is a chain. This general result provides a bound for the observer gain that ensures robustness of the network-decentralized observer even under arbitrary, possibly switching, configurations, and in the presence of noise.
12/01/2016 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600026
The Coevolution of Appraisal and Influence Networks Leads to Structural Balance
https://www.computer.org/csdl/trans/tn/2016/04/07542503-abs.html
In sociology, an appraisal structure, represented by a signed matrix or a signed network, describes an evaluative cognitive configuration among individuals. In this article we argue that interpersonal influences originate from positive interpersonal appraisals and, in turn, adjust individuals’ appraisals of others. This mechanism amounts to a coevolution process of interpersonal appraisals and influences. We provide a mathematical formulation of the coevolutionary dynamics, characterize the invariant appraisal structures, and establish the convergence properties for all possible initial appraisals. Moreover, we characterize the implications of our model for the study of signed social networks. Specifically, our model predicts the convergence of the interpersonal appraisal network to a structure composed of multiple factions with multiple followers. A faction is a group of individuals with positive-complete interpersonal appraisals among them. We discuss how this factions-with-followers structure is balanced with respect to an appropriate generalized model of balance theory.
12/01/2016 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600058
Information Propagation in Clustered Multilayer Networks
https://www.computer.org/csdl/trans/tn/2016/04/07542505-abs.html
In today's world, individuals interact with each other in more complicated patterns than ever. Some individuals engage through online social networks (e.g., Facebook, Twitter), while some communicate only through conventional ways (e.g., face-to-face). Therefore, understanding the dynamics of information propagation among humans calls for a multi-layer network model where an online social network is conjoined with a physical network. In this work, we initiate a study of information diffusion in a <italic>clustered</italic> multi-layer network model, where all constituent layers are random networks with high clustering. We assume that information propagates according to the SIR model and with different information transmissibility across the networks. We give results for the conditions, probability, and size of information epidemics, i.e., cases where information starts from a single individual and reaches a positive fraction of the population. We show that increasing the level of clustering in either one of the layers increases the epidemic threshold and decreases the final epidemic size in the whole system. An interesting finding is that information with low transmissibility spreads more effectively with a small but densely connected social network, whereas highly transmissible information spreads better with the help of a large but loosely connected social network.
12/15/2016 3:00 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600059
Suppressing Epidemics in Networks Using Priority Planning
https://www.computer.org/csdl/trans/tn/2016/04/07542525-abs.html
In this paper, we analyze a large class of dynamic resource allocation (DRA) strategies, named <italic>priority planning</italic>, that aim to suppress SIS epidemics taking place in a network. This is performed by distributing treatments of limited efficiency to its infected nodes, according to a <italic>priority-order</italic> precomputed offline. Under this perspective, an efficient DRA strategy for a given network can be designed by learning a proper <italic>linear arrangement</italic> of its nodes. In our theoretical analysis, we derive upper and lower bounds for the extinction time of the diffusion process that reveal the role of the <italic>maxcut</italic> of the considered linear arrangement. Accordingly, we highlight that the <italic>cutwidth</italic>, which is the minimum maxcut of all possible linear arrangements for a network, is a fundamental network property that determines the resource budget required to suppress the epidemic under priority planning. Finally, by making direct use of our theoretical results, we propose a novel and efficient DRA strategy, called <italic>maxcut minimization</italic> (MCM), which outperforms other competing strategies in our simulations, while offering desirable robustness under various noise profiles.
12/15/2016 3:00 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600029
On Propagation of Phenomena in Interdependent Networks
https://www.computer.org/csdl/trans/tn/2016/04/07542532-abs.html
When multiple networks are interconnected because of mutual service interdependence, propagation of phenomena across the networks is likely to occur. Depending on the type of networks and phenomenon, the propagation may be a desired effect, such as the spread of information or consensus in a social network, or an unwanted one, such as the propagation of a virus or a cascade of failures in a communication or service network. In this paper, we propose a general analytic model that captures multiple types of dependency and of interaction among nodes of interdependent networks, that may cause the propagation of phenomena. The above model is used to evaluate the effects of different diffusion models in a wide range of network topologies, including different models of random graphs and real networks. We propose a new centrality metric and compare it to more traditional approaches to assess the impact of individual network nodes in the propagation. We propose guidelines to design networks in which the diffusion is either a desired phenomenon or an unwanted one, and consequently must be fostered or prevented, respectively. We performed extensive simulations to extend our study to large networks and to show the benefits of the proposed design solutions.
12/01/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600033
Reciprocity and Efficiency in Peer Exchange of Wireless Nodes Through Convex Optimization
https://www.computer.org/csdl/trans/tn/2016/04/07542597-abs.html
This paper considers the allocation of exchange rates in a network of wireless nodes which engage in peer-to-peer dissemination. Here, in addition to the desirable throughput efficiency, it is important to ensure a level of rate reciprocity between peers, an issue that has been studied before only for wired networks. For the wireless substrate, efficiency and reciprocity may be in conflict, due to the non-uniform link capacities of different peering choices, and the possible interference between them. We use convex optimization to formulate a relevant tradeoff, measuring reciprocity through a Kullback-Leibler divergence between sent and received rates. We propose decentralized methods which involve peer-to-peer interactions, and are shown to converge to the corresponding tradeoff point. Illustrative simulations are provided.
12/01/2016 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600036
Phase-Coupled Oscillators with Plastic Coupling: Synchronization and Stability
https://www.computer.org/csdl/trans/tn/2016/04/07558209-abs.html
In this article we study synchronization of systems of homogeneous phase-coupled oscillators with plastic coupling strengths and arbitrary underlying topology. The dynamics of the coupling strength between two oscillators are governed by the phase difference between these oscillators. We show that, under mild assumptions, such systems are gradient systems and always achieve frequency synchronization. Furthermore, we provide sufficient stability and instability conditions that are based on results from algebraic graph theory. For the special case when the underlying topology is a tree, we formulate a criterion (necessary and sufficient condition) for stability of equilibria. For both tree and arbitrary topologies, we provide sufficient conditions for phase-locking, i.e., convergence to a stable equilibrium almost surely. We additionally find conditions under which the system possesses a unique stable equilibrium, and thus almost global stability follows. Several examples are used to demonstrate the variety of equilibria the system has and their dependence on the system's parameters, and to illustrate differences in behavior of systems with constant and plastic coupling strengths.
12/01/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2605096
Cascading Failures in Load-Dependent Finite-Size Random Geometric Networks
https://www.computer.org/csdl/trans/tn/2016/04/07564402-abs.html
The problem of cascading failures in cyber-physical systems is drawing much attention under a variety of network models for a diverse range of applications. While many analytic results have been reported for the case of large networks, very few of them are readily applicable to finite-size networks. This paper studies cascading failures in load-dependent finite-size geometric networks where the number of nodes is on the order of hundreds, as in many real-life networks. In such networks, every node carries a certain amount of load in normal conditions, and maintains a <italic>load margin</italic> that enables handling a higher load up to a limit, if necessary. We investigate the impact of design parameters such as load margin, node density, and connectivity radius on network reaction to initial disturbances of different sizes. We quantify the damage imposed on the network, and derive lower and upper bounds on the size of this damage. Our finite-size analysis reveals the decisiveness and criticality of taking action within the first few stages of failure propagation in preventing a cascade. Furthermore, studying the trend of the bounds as the number of nodes increases indicates a <italic>phase transition</italic> behavior in the size of the cascade with respect to the load margin. We derive the critical value of the load margin at which such a transition occurs. The findings of this paper, in particular, shed light on how to choose the load margin appropriately such that a cascade of failures could be avoided.
12/01/2016 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2608341
Information Diffusion in Social Networks in Two Phases
https://www.computer.org/csdl/trans/tn/2016/04/07570252-abs.html
The problem of maximizing information diffusion, given a certain budget expressed in terms of the number of seed nodes, is an important topic in social networks research. Existing literature focuses on single phase diffusion where all seed nodes are selected at the beginning of diffusion and all the selected nodes are activated simultaneously. This paper undertakes a detailed investigation of the effect of selecting and activating seed nodes in multiple phases. Specifically, we study diffusion in two phases assuming the well-studied independent cascade model. First, we formulate an objective function for two-phase diffusion, investigate its properties, and propose efficient algorithms for finding seed nodes in the two phases. Next, we study two associated problems: (1) <italic>budget splitting</italic> which seeks to optimally split the total budget between the two phases and (2) <italic>scheduling</italic> which seeks to determine an optimal delay after which to commence the second phase. Our main conclusions include: (a) under strict temporal constraints, use single phase diffusion, (b) under moderate temporal constraints, use two-phase diffusion with a short delay while allocating most of the budget to the first phase, and (c) when there are no temporal constraints, use two-phase diffusion with a long delay while allocating roughly one-third of the budget to the first phase.
12/01/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2610838
The Minimum Information Dominating Set for Opinion Sampling in Social Networks
https://www.computer.org/csdl/trans/tn/2016/04/07763903-abs.html
We consider the problem of sampling a node-valued graph. The objective is to infer the values of all nodes from those of a minimum subset of nodes by exploiting correlations in node values. We first introduce the concept of <italic>information dominating set</italic> (IDS). A subset of nodes in a given graph is an IDS if the values of these nodes are sufficient to infer the values of all nodes. We focus on two fundamental algorithmic problems: (i) how to determine whether a given subset of nodes is an IDS; (ii) how to construct a minimum IDS. Assuming binary node values and the local majority rule for information correlation, we first show that in acyclic graphs, both problems admit linear-complexity solutions by establishing a connection between the IDS problems and the vertex cover problem. We then show that in a general graph, the first problem is co-NP-complete and the second problem is NP-hard. We develop two approaches to solve the IDS problems: one reduces the problems to a hitting set problem based on the concept of <italic>essential difference set</italic>; the other is a gradient-based approach with a tunable parameter that trades off performance with time complexity. The concept of IDS finds applications in opinion sampling such as political polling and market survey, identifying critical nodes in information networks, and inferring epidemics and cascading failures in communication and infrastructure networks.
12/01/2016 2:03 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2600030
Efficient Multistate Route Computation
https://www.computer.org/csdl/trans/tn/2016/03/07502077-abs.html
A multi-state graph is defined as a graph where each edge is associated with a stochastic, rather than fixed, value. For cases where the stochastic value is discrete and represents a distance or time, an algorithm is presented which efficiently finds the single-source shortest paths and distances for all combinations of edge values. Infinite and negative weight values are allowed as long as there are no negative-weight cycles. Efficiency is achieved both by leveraging dynamic programming and by only finding solutions for the set of dominant states that correctly cover any possible edge-metric setting for the graph. Although the method has exponential complexity in the worst case, some samples and example graphs are used to illustrate the savings over a brute-force combinatoric approach. Furthermore, by leveraging some real-world assumptions, a pseudo-polynomial version is presented that can be applied to larger-scale problems. The results of the method are then shown to enable the computation of <italic>most-likely shortest paths</italic>.
08/31/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2586750
Effective Network Quarantine with Minimal Restrictions on Communication Activities
https://www.computer.org/csdl/trans/tn/2016/03/07502128-abs.html
This paper studies a network structure design problem constrained by epidemic outbreaks. In our model, geographic locations (nodes) and their connections (edges) are modeled as a ring graph. The movement of a person is represented as a flow from one location to another. A person can be infected at a location (node), depending on the number of infected flows (persons) going through that location. In our paper, diseases are not limited to real human diseases; they can also refer to general epidemic information propagation. Given desired interaction traffic from a node to other nodes in terms of flows, and a greedy shortest-path routing scheme that is analogous to greedy coin change, we focus on the structure design (representing quarantine rules) that determines the number and distribution of chords on the virtual ring network for remote connections. Our objective is to minimize the average number of routing hops while keeping epidemic outbreaks under control for given infection and recovery rates. We provide a systematic isomorphic structure design for nine different cases, based on three traffic distribution models and three infection rate models. Two hypercube-based structures are proposed. We also provide a greedy solution for constructing polymorphic structures. Our study reveals some intriguing theoretical results, validated through experiments, on tradeoffs between local and global infections. Our work casts new light on effective network quarantine that places minimal restrictions on connections, i.e., maximal preservation of normal communication activities, while controlling epidemic outbreaks.
08/31/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2586751
Analyses of the Clustering Coefficient and the Pearson Degree Correlation Coefficient of Chung's Duplication Model
https://www.computer.org/csdl/trans/tn/2016/03/07502180-abs.html
Recent advances in gene expression profiling and proteomics techniques have spawned considerable interest in duplication models for modelling the evolution and growth of biological networks. In this paper, we consider the duplication model studied by Chung et al. To the best of our knowledge, neither the clustering coefficient nor the Pearson degree correlation coefficient of this model has been studied <italic>analytically</italic>. For such a model, we study the degree of a randomly selected vertex and derive first-order differential equations for its mean, second moment, and third moment. We also study the degrees of the two vertices that appear at both ends of a randomly selected edge and derive an approximation for the expected product of the degrees of these two vertices. Using these results, we obtain closed-form approximations for the clustering coefficient and the Pearson degree correlation coefficient of the duplication model. For the clustering coefficient, numerical results calculated from our approximation and the corresponding simulation results agree extremely well for the whole evolution process. For the Pearson degree correlation coefficient, there is some discrepancy at early times between the simulation results and the numerical results. However, as time goes on, the discrepancy diminishes. We present an asymptotic approximation by keeping only the dominant terms in the clustering coefficient and the degree correlation coefficient. Numerical study indicates that the relative approximation error can decrease slowly with time when the selection probability of the model is near some special values.
08/31/2016 2:02 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2586848
Analyzing Vulnerability of Power Systems with Continuous Optimization Formulations
https://www.computer.org/csdl/trans/tn/2016/03/07506129-abs.html
Potential vulnerabilities in a power grid can be exposed by identifying those transmission lines on which attacks (in the form of interference with their transmission capabilities) cause maximum disruption to the grid. In this study, we model the grid by (nonlinear) AC power flow equations, and assume that attacks take the form of increased impedance along transmission lines. We quantify disruption in several different ways, including (a) overall deviation of the voltages at the buses from <inline-formula><tex-math notation="LaTeX">$1.0$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="kim-ieq1-2587484.gif"/></alternatives></inline-formula> per unit (p.u.), and (b) the minimal amount of load that must be shed in order to restore the grid to stable operation. We describe optimization formulations of the problem of finding the most disruptive attack, which are either nonlinear programming problems or nonlinear bilevel optimization problems, and describe customized algorithms for solving these problems. Experimental results on the IEEE 118-Bus system and a Polish 2383-Bus system are presented.09/03/2016 2:34 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2587484Design of Self-Organizing Networks: Creating Specified Degree Distributions
https://www.computer.org/csdl/trans/tn/2016/03/07508902-abs.html
A key problem in the study and design of complex systems is the apparent disconnection between the microscopic and the macroscopic. It is not straightforward to identify the local interactions that give rise to an observed global phenomenon, nor is it simple to design a system that will exhibit some desired global property using only local knowledge. Here we propose a methodology that allows for the identification of local interactions that give rise to a desired global property of a network, the degree distribution. Given a set of observable processes acting on a network, we determine the conditions that must be satisfied to generate a desired steady-state degree distribution. We thereby provide a simple example for a class of tasks where a system can be designed to self-organize to a given state.08/31/2016 2:02 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2586762Clustering Network Layers with the Strata Multilayer Stochastic Block Model
https://www.computer.org/csdl/trans/tn/2016/02/07442167-abs.html
Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the “strata multilayer stochastic block model” (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called “strata”, which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project.06/06/2016 2:00 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2537545A Network Model for Vehicular Ad Hoc Networks: An Introduction to Obligatory Attachment Rule
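The generative assumption behind the sMLSBM, that every layer in a stratum is drawn from one common stochastic block model, can be sketched by sampling a single SBM layer: edge probability depends only on whether the endpoints share a community. The communities and probabilities below are made-up illustration values, not parameters from the paper:

```python
import random

def sample_sbm(communities, p_in, p_out, rng):
    """Draw one network layer from a stochastic block model: an edge
    appears with probability p_in inside a community, p_out across."""
    nodes = [v for c in communities for v in c]
    label = {v: i for i, c in enumerate(communities) for v in c}
    edges = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            p = p_in if label[u] == label[v] else p_out
            if rng.random() < p:
                edges.add((u, v))
    return edges

rng = random.Random(7)
# Two communities of 10 nodes; all layers of one stratum would reuse
# these same parameters.
layer = sample_sbm([range(0, 10), range(10, 20)], p_in=0.9, p_out=0.05, rng=rng)
```

Fitting sMLSBM inverts this process: it groups layers whose empirical edge densities are explained by shared `p_in`/`p_out`-style parameters and community assignments.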
https://www.computer.org/csdl/trans/tn/2016/02/07468574-abs.html
In the past few years, the study of complex networks has attracted the attention of researchers. Many real networks, ranging from technological networks such as the Internet to biological networks, have been considered as special types of complex networks. Through the application of network science, important structural properties of such networks have been analyzed and the mechanisms that form such characteristics have been introduced. In this paper, we address the structural characteristics of a technological network called Vehicular Ad hoc Networks (VANETs). Recent studies reveal that the communication graph of VANETs has some interesting characteristics, including Gaussian degree distributions, very high clustering coefficients, and the absence of the small-world property. These findings can introduce VANETs as a new network with unique properties and may shed light on discovering new networks with similar characteristics. In this paper, we propose the <italic>obligatory attachment rule</italic> and we show that this attachment rule can be used to address the structural characteristics of the communication graph of VANETs in highway environments. The accuracy and applicability of the proposed analytical model during different time periods have been discussed through comparison with empirical data.06/06/2016 2:00 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2566616Enhancement of Synchronizability in Networks with Community Structure through Adding Efficient Inter-Community Links
https://www.computer.org/csdl/trans/tn/2016/02/07469390-abs.html
In this paper we propose a framework for enhancing synchronizability of networks with community structure through adding efficient inter-community links. Adding new inter-community links to a network with community structure usually improves its synchronizability. However, this is achieved by increasing communication cost in the network. Thus, the links should be designed in a way that adding them results in maximal improvement. Here, we first consider two disjoint communities, and then, propose an algorithm for choosing nodes from each community to create a certain number of inter-community links between them. We propose an efficient rewiring algorithm, which uses a criterion based on eigenvectors corresponding to the largest and second smallest eigenvalues of the Laplacian matrix. The algorithm uses a modified simulated annealing approach for the optimization process. Extensive numerical simulations show that the proposed algorithm can systematically improve the synchronizability of the network as defined by the eigenratio of the Laplacian. We also show that the proposed rewiring algorithm outperforms heuristic methods such as finding the best possible inter-community links between hub nodes, while having much better computational complexity as well. We study the properties of the end-nodes connected to the final optimized inter-community links. We find that in some cases these nodes have much less centrality values (e.g., degree and betweenness) than the hub nodes of the communities, which means that the optimization algorithm finds non-hub nodes to create inter-community connections in between. The optimized networks are also shown to be good structures for phase synchronizability of non-identical oscillators.06/07/2016 2:09 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2566615Stability of Spreading Processes over Time-Varying Large-Scale Networks
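The synchronizability measure used above, the eigenratio λ_max/λ_2 of the Laplacian (smaller is better), can be illustrated with a graph whose Laplacian spectrum is known in closed form. This is our own worked example, not the paper's rewiring algorithm: the cycle C_n has Laplacian eigenvalues 2 − 2cos(2πk/n), while the fully connected graph K_n attains the optimal ratio of 1, so adding links drives the ratio toward 1:

```python
import math

def cycle_eigenratio(n):
    """Eigenratio lambda_max / lambda_2 of the cycle C_n, using the
    closed-form Laplacian eigenvalues 2 - 2*cos(2*pi*k/n)."""
    eig = sorted(2 - 2 * math.cos(2 * math.pi * k / n) for k in range(n))
    return eig[-1] / eig[1]  # eig[0] is the trivial zero eigenvalue

# A 10-node ring is poorly synchronizable (ratio ~10.47); the complete
# graph K_10 has ratio exactly 1, the optimum.
r_cycle = cycle_eigenratio(10)
```

The rewiring algorithm in the paper searches for the few inter-community links whose addition most reduces exactly this ratio.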
https://www.computer.org/csdl/trans/tn/2016/01/07377119-abs.html
In this paper, we analyze the dynamics of spreading processes taking place over time-varying networks. A common approach to model time-varying networks is via Markovian random graph processes. This modeling approach presents the following limitation: Markovian random graphs can only replicate switching patterns with exponential inter-switching times, while in real applications these times are usually far from exponential. In this paper, we introduce a flexible and tractable extended family of processes able to replicate, with arbitrary accuracy, any distribution of inter-switching times. We then study the stability of spreading processes in this extended family. We first show that a direct analysis based on Itô's formula provides stability conditions in terms of the eigenvalues of a matrix whose size grows exponentially with the number of edges. To overcome this limitation, we derive alternative stability conditions involving the eigenvalues of a matrix whose size grows linearly with the number of nodes. Based on our results, we also show that heuristics based on aggregated static networks approximate the epidemic threshold more accurately as the number of nodes grows, or the temporal volatility of the random graph process is reduced. Finally, we illustrate our findings via numerical simulations.03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2516346A Mathematical Theory for Clustering in Metric Spaces
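The epidemic threshold that the aggregated-static heuristic approximates is, for a static network, governed by the largest adjacency eigenvalue: an SIS epidemic with infection rate β and recovery rate δ dies out roughly when β/δ < 1/λ_max. A minimal sketch computing λ_max by power iteration on a small made-up graph (the static special case, not the paper's time-varying analysis):

```python
def spectral_radius(adj_matrix, iters=200):
    """Largest adjacency eigenvalue via power iteration (max-norm
    normalization); valid here since the matrix is nonnegative."""
    n = len(adj_matrix)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(adj_matrix[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam

# "Paw" graph: triangle 0-1-2 with pendant node 3 attached to node 0.
# Its spectral radius is the root of x^3 - x^2 - 3x + 1 near 2.17.
paw = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
lam = spectral_radius(paw)
```

For this graph the epidemic threshold condition is approximately β/δ < 1/2.17 ≈ 0.46.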
https://www.computer.org/csdl/trans/tn/2016/01/07378297-abs.html
Clustering is one of the most fundamental problems in data analysis and it has been studied extensively in the literature. Though many clustering algorithms have been proposed, clustering theories that justify the use of these clustering algorithms are still unsatisfactory. In particular, one of the fundamental challenges is to address the following question: What is a cluster in a set of data points? In this paper, we make an attempt to address such a question by considering a set of data points associated with a distance measure (metric). We first propose a new <italic>cohesion</italic> measure in terms of the distance measure. Using the cohesion measure, we define a cluster as a set of points that are cohesive to themselves. For such a definition, we show there are various equivalent statements that have intuitive explanations. We then consider the second question: How do we find clusters and good partitions of clusters under such a definition? For such a question, we propose a hierarchical agglomerative algorithm and a partitional algorithm. Unlike standard hierarchical agglomerative algorithms, our hierarchical agglomerative algorithm has a specific stopping criterion and it stops with a partition of clusters. Our partitional algorithm, called the <inline-formula><tex-math notation="LaTeX">$K$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="chang-ieq1-2516339.gif"/></alternatives></inline-formula>-sets algorithm in the paper, appears to be a new iterative algorithm. Unlike the Lloyd iteration that needs two-step minimization, our <inline-formula><tex-math notation="LaTeX">$K$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="chang-ieq2-2516339.gif"/></alternatives></inline-formula>-sets algorithm only takes one-step minimization. One of the most interesting findings of our paper is the duality result between a distance measure and a cohesion measure. 
Such a duality result leads to a dual <inline-formula><tex-math notation="LaTeX">$K$ </tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="chang-ieq3-2516339.gif"/></alternatives></inline-formula>-sets algorithm for clustering a set of data points with a cohesion measure. The dual <inline-formula> <tex-math notation="LaTeX">$K$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="chang-ieq4-2516339.gif"/> </alternatives></inline-formula>-sets algorithm converges in the same way as a sequential version of the classical kernel <inline-formula><tex-math notation="LaTeX">$K$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="chang-ieq5-2516339.gif"/></alternatives></inline-formula>-means algorithm. The key difference is that a cohesion measure does not need to be positive semi-definite.03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.25163392015 Index IEEE Transactions on Network Science and Engineering Vol. 2
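A distinctive point above is that the paper's hierarchical agglomerative algorithm has a specific stopping criterion. A generic single-linkage sketch with an explicit distance threshold as the stopping rule conveys the shape of such a procedure (this is a standard textbook method with made-up data, not the paper's cohesion-based K-sets algorithm):

```python
def single_linkage(points, dist, threshold):
    """Agglomerative clustering: repeatedly merge the two closest
    clusters, stopping once the minimum inter-cluster distance
    exceeds `threshold` -- an explicit stopping criterion."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        link = lambda ij: min(dist(a, b)
                              for a in clusters[ij[0]] for b in clusters[ij[1]])
        i, j = min(pairs, key=link)
        if link((i, j)) > threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters

pts = [0.0, 0.1, 0.2, 5.0, 5.1]
out = single_linkage(pts, lambda a, b: abs(a - b), threshold=1.0)
```

The algorithm halts with two clusters, {0.0, 0.1, 0.2} and {5.0, 5.1}; the paper's version replaces the raw distance test with its cohesion measure.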
https://www.computer.org/csdl/trans/tn/2016/01/07387808-abs.html
01/28/2016 4:39 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2514185Sync-Rank: Robust Ranking, Constrained Ranking and Rank Aggregation via Eigenvector and SDP Synchronization
https://www.computer.org/csdl/trans/tn/2016/01/07395367-abs.html
We consider the classical problem of establishing a statistical ranking of a set of <inline-formula> <tex-math notation="LaTeX">$n$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="cucuringu-ieq1-2523761.gif"/> </alternatives></inline-formula> items given a set of inconsistent and incomplete pairwise comparisons between such items. Instantiations of this problem occur in numerous applications in data analysis (e.g., ranking teams in sports data), computer vision, and machine learning. We formulate the above problem of ranking with incomplete noisy information as an instance of the <italic>group synchronization</italic> problem over the group SO(2) of planar rotations, whose usefulness has been demonstrated in numerous applications in recent years in computer vision and graphics, sensor network localization and structural biology. Its least squares solution can be approximated by either a spectral or a semidefinite programming (SDP) relaxation, followed by a rounding procedure. We perform extensive numerical simulations on both synthetic and real-world data sets, which show that our proposed method compares favorably to other ranking methods from the recent literature. Existing theoretical guarantees on the group synchronization problem imply lower bounds on the largest amount of noise permissible in the data while still achieving an approximate recovery of the ground truth ranking. We propose a similar synchronization-based algorithm for the rank-aggregation problem, which integrates in a globally consistent ranking many pairwise rank-offsets or partial rankings, given by different rating systems on the same set of items, an approach which yields significantly more accurate results than other aggregation methods, including <italic>Rank-Centrality</italic>, a recent state-of-the-art algorithm. 
Furthermore, we discuss the problem of semi-supervised ranking when there is available information on the ground truth rank of a subset of players, and propose an algorithm based on SDP which is able to recover the ranking of the remaining players, subject to such hard constraints. Finally, synchronization-based ranking, combined with a spectral technique for the densest subgraph problem, makes it possible to extract partial rankings that other methods are not able to find, in other words, to identify the rank of a small subset of players whose pairwise rank comparisons are less noisy than the rest of the data. We discuss a number of related open questions which we defer for future investigation.03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2523761Detecting Multiple Information Sources in Networks under the SIR Model
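The input to such ranking methods is a set of pairwise rank offsets o[i][j] ≈ r_i − r_j. For a complete comparison graph the least-squares scores are simply the centered row means, which makes a tiny baseline easy to sketch (our own simplified example, not the paper's SO(2) synchronization or SDP methods):

```python
def rank_from_offsets(offsets):
    """Least-squares scores from a complete matrix of pairwise offsets
    o[i][j] ~ r_i - r_j; with complete data the minimizer of
    sum (s_i - s_j - o_ij)^2 is the row mean (up to a constant)."""
    n = len(offsets)
    scores = [sum(row) / n for row in offsets]
    return sorted(range(n), key=lambda i: -scores[i])

# True scores 3 > 2 > 1, with a little noise on the (1, 2) comparison.
o = [[0.0,  1.0,  2.0],
     [-1.0, 0.0,  1.2],
     [-2.0, -1.2, 0.0]]
ranking = rank_from_offsets(o)  # item 0 first, item 2 last
```

The synchronization formulation in the paper generalizes this idea to incomplete, highly noisy comparison graphs, where the naive row mean breaks down.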
https://www.computer.org/csdl/trans/tn/2016/01/07395374-abs.html
In this paper, we study the problem of detecting multiple information sources in networks under the Susceptible-Infected-Recovered (SIR) model. First, assuming the number of information sources is known, we develop a sample-path-based algorithm, named clustering and localization, for trees. For <inline-formula> <tex-math notation="LaTeX">$g$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="chen-ieq1-2523804.gif"/> </alternatives></inline-formula>-regular trees, the estimators produced by the proposed algorithm are within a constant distance from the real sources with a high probability. We further present a heuristic algorithm for general networks and an algorithm for estimating the number of sources when the number of real sources is unknown.03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2523804On the Interplay Between Individuals’ Evolving Interaction Patterns and Traits in Dynamic Multiplex Social Networks
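For a single source on a tree, sample-path-based detection is closely related to the Jordan infection center: the node minimizing the maximum distance to any infected node. A minimal BFS sketch on a made-up path graph (a standard single-source heuristic, not the paper's multi-source clustering-and-localization algorithm):

```python
from collections import deque

def jordan_center(adj, infected):
    """Return the node minimizing the max BFS distance to any infected
    node -- the Jordan infection center source estimator."""
    def dists(src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d
    return min(adj, key=lambda u: max(dists(u)[v] for v in infected))

# Path 0-1-2-3-4 with both endpoints infected: the midpoint is returned.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
src = jordan_center(path, {0, 4})
```

The multi-source algorithm in the paper first clusters the infected nodes, then applies a localization step of this flavor within each cluster.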
https://www.computer.org/csdl/trans/tn/2016/01/07395376-abs.html
The interplay between individuals’ social interactions and traits has been studied extensively but traditionally from static or homogeneous social network perspectives. The recent availability of dynamic and heterogeneous (multiplex) network data has introduced a variety of new challenges. Critically, novel computational models are needed that can cope with data dynamics and heterogeneity. In this paper, we introduce a computational framework that is broadly applicable to a variety of dynamic, multiplex domains, which focuses on: 1) measuring changes in node interaction patterns with time, 2) clustering nodes with similar evolving patterns, and 3) linking the clusters with trait similarities. We apply the framework to study the interplay between evolving topology and traits in an 18-month social network dataset encompassing both digital communications and co-location instances. Notably, we demonstrate how our framework captures results that would otherwise be missed by a simpler approach such as static network analysis alone. In addition, we uncover network-trait interplays that have not been studied to date and could lead to novel insights by domain scientists.03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.25237982015 Reviewers List<formula formulatype="inline"><tex Notation="TeX">$^\ast$</tex></formula>
https://www.computer.org/csdl/trans/tn/2016/01/07425344-abs.html
03/08/2016 3:59 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2529101EIC Editorial
https://www.computer.org/csdl/trans/tn/2016/01/07426485-abs.html
03/08/2016 4:43 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2016.2533678Reconstruction in the Labelled Stochastic Block Model
https://www.computer.org/csdl/trans/tn/2015/04/07298446-abs.html
The labelled stochastic block model is a random graph model representing networks with community structure and interactions of multiple types. In its simplest form, it consists of two communities of approximately equal size, and the edges are drawn and labelled at random with probability depending on whether their two endpoints belong to the same community or not. It has been conjectured in <xref ref-type="bibr" rid="ref1">[1]</xref> that correlated reconstruction (i.e., identification of a partition correlated with the true partition into the underlying communities) would be feasible if and only if a model parameter exceeds a threshold. We prove one half of this conjecture, i.e., reconstruction is impossible below the threshold. In the positive direction, we introduce a weighted graph to exploit the label information. With a suitable choice of weight function, we show that when above the threshold by a specific constant, reconstruction is achieved by (1) minimum bisection, (2) a semidefinite relaxation of minimum bisection, and (3) a spectral method combined with removal of edges incident to vertices of high degree. Furthermore, we show that hypothesis testing between the labelled stochastic block model and the labelled Erdős-Rényi random graph model exhibits a phase transition at the conjectured reconstruction threshold.12/29/2015 12:48 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2490580Data-Driven Network Resource Allocation for Controlling Spreading Processes
https://www.computer.org/csdl/trans/tn/2015/04/07327231-abs.html
We propose a mathematical framework, based on conic geometric programming, to control a susceptible-infected-susceptible viral spreading process taking place in a directed contact network with unknown contact rates. We assume that we have access to time series data describing the evolution of the spreading process observed by a collection of sensor nodes over a finite time interval. We propose a data-driven robust optimization framework to find the optimal allocation of protection resources (e.g., vaccines and/or antidotes) to eradicate the viral spread at the fastest possible rate. In contrast to current network identification heuristics, in which a single network is identified to explain the observed data, we use available data to define an uncertainty set containing all networks that are coherent with empirical observations. Through Lagrange duality and convexification of the uncertainty set, we are able to relax the robust optimization problem into a conic geometric program, recently proposed by Chandrasekaran and Shah <xref ref-type="bibr" rid="ref1">[1]</xref> , which allows us to efficiently find the optimal allocation of resources to control the worst-case spread that can take place in the uncertainty set of networks. We illustrate our approach in a transportation network from which we collect partial data about the dynamics of a hypothetical epidemic outbreak over a finite period of time.12/29/2015 12:48 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2500158The Strategic Formation of Multi-Layer Networks
https://www.computer.org/csdl/trans/tn/2015/04/07328295-abs.html
We study the strategic formation of multi-layer networks, where each layer represents a different type of relationship between the nodes in the network and is designed to maximize some utility that depends on the topology of that layer and those of the other layers. We start by generalizing distance-based network formation to the two-layer setting, where edges are constructed in one layer (with fixed cost per edge) to minimize distances between nodes that are neighbors in another layer. We show that designing an optimal network in this setting is NP-hard. Despite the underlying complexity of the problem, we characterize certain properties of the optimal networks. We then formulate a multi-layer network formation game where each layer corresponds to a player that is optimally choosing its edge set in response to the edge sets of the other players. We consider utility functions that view the different layers as strategic substitutes. By applying our results about optimal networks, we show that players with low edge costs drive players with high edge costs out of the game, and that hub-and-spoke networks that are commonly observed in transportation systems arise as Nash equilibria in this game.12/29/2015 12:48 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2500162Formation of Robust Multi-Agent Networks through Self-Organizing Random Regular Graphs
https://www.computer.org/csdl/trans/tn/2015/04/07337422-abs.html
Multi-agent networks are often modeled as interaction graphs, where the nodes represent the agents and the edges denote direct interactions. The robustness of a multi-agent network to perturbations such as failures, noise, or malicious attacks largely depends on the corresponding graph. In many applications, networks are desired to have well-connected interaction graphs with a relatively small number of links. One family of such graphs is the random regular graphs. In this paper, we present a decentralized scheme for transforming any connected interaction graph with a possibly non-integer average degree of <inline-formula><tex-math>$k$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="yazicioglu-ieq1-2503983.gif"/></alternatives></inline-formula> into a connected random <inline-formula><tex-math>$m$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="yazicioglu-ieq2-2503983.gif"/> </alternatives></inline-formula>-regular graph for some <inline-formula><tex-math>$m\in [k, k+2]$</tex-math> <alternatives><inline-graphic xlink:type="simple" xlink:href="yazicioglu-ieq3-2503983.gif"/></alternatives></inline-formula>. Accordingly, the agents improve the robustness of the network while maintaining a similar number of links as in the initial configuration by locally adding or removing edges.12/29/2015 12:48 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2503983Understanding Sequential User Behavior in Social Computing: To Answer or to Vote?
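The basic move underlying randomization toward a random regular graph is the degree-preserving double edge swap: replace edges (a,b) and (c,d) with (a,d) and (c,b), which leaves every node's degree unchanged. A centralized sketch of that primitive (the paper's scheme is decentralized and also adjusts degrees; this only shows the swap):

```python
import random

def double_edge_swap(edges, rng, attempts=200):
    """Randomize a simple graph while preserving all degrees: rewire
    (a,b),(c,d) -> (a,d),(c,b) when no self-loop or duplicate results."""
    edge_list = [tuple(sorted(e)) for e in edges]
    edge_set = set(edge_list)
    for _ in range(attempts):
        (a, b), (c, d) = rng.sample(edge_list, 2)
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if a == d or c == b or new1 in edge_set or new2 in edge_set:
            continue  # would create a self-loop or an existing edge
        for old in ((a, b), (c, d)):
            edge_set.remove(old)
            edge_list.remove(old)
        edge_set.update((new1, new2))
        edge_list.extend((new1, new2))
    return edge_list

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
swapped = double_edge_swap(cycle, random.Random(42))
```

After any number of swaps the 6-cycle still has six edges and every node still has degree 2, which is the invariant the self-organizing scheme exploits.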
https://www.computer.org/csdl/trans/tn/2015/03/07210192-abs.html
Understanding how users participate is of key importance to social computing systems, since their value is created from user contributions. In many social computing systems, users decide sequentially whether to participate or not and, if they participate, whether to create a piece of content directly, i.e., answering, or to rate existing content, i.e., voting. Moreover, there exists an answering-voting externality, as a user’s utility for answering depends on votes received in the future. We present in this paper a game-theoretic model that formulates the sequential decision making of strategic users under the presence of such an answering-voting externality. We prove theoretically the existence and uniqueness of a pure strategy equilibrium. To further understand the equilibrium participation of users, we show that there exist advantages for users with higher abilities and for answering earlier. Therefore, the equilibrium has a threshold structure and the threshold for answering gradually increases as answers accumulate. We further extend our results to a more general setting where users can choose endogenously their efforts for answering. To show the validity of our model, we analyze user behavior data collected from the popular Q&A site Stack Overflow and show that the main qualitative predictions of our model match up with observations made from the data. Finally, we formulate the system designer’s problem and abstract from numerical simulations several design principles that could potentially guide the design of incentive mechanisms for social computing systems in practice.10/13/2015 11:40 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2470542Sparse Solutions to the Average Consensus Problem via Various Regularizations of the Fastest Mixing Markov-Chain Problem
https://www.computer.org/csdl/trans/tn/2015/03/07268908-abs.html
In the consensus problem on multi-agent systems, in which the states of the agents represent opinions, the agents aim at reaching a common opinion (or consensus state) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems to achieve a good trade-off between a small number of interconnections and a fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through <inline-formula><tex-math>$l_1$</tex-math> <alternatives><inline-graphic xlink:type="simple" xlink:href="gnecco-ieq1-2479086.gif"/></alternatives></inline-formula>-norm and <inline-formula><tex-math>$l_0$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="gnecco-ieq2-2479086.gif"/> </alternatives></inline-formula>-“pseudo-norm” regularized versions of the well-known Fastest Mixing Markov-Chain (FMMC) problem. We show that such versions can be interpreted as robust forms of the FMMC problem and provide results to guide the choice of the regularization parameter.10/13/2015 11:40 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2479086Rigid Network Design Via Submodular Set Function Optimization
https://www.computer.org/csdl/trans/tn/2015/03/07272118-abs.html
We consider the problem of constructing networks that exhibit desirable algebraic rigidity properties, which can provide significant performance improvements for associated formation shape control and localization tasks. We show that the network design problem can be formulated as a submodular set function optimization problem and propose greedy algorithms that achieve global optimality or an established near-optimality guarantee. We also consider the separate but related problem of selecting anchors for sensor network localization to optimize a metric of the error in the localization solutions. We show that an interesting metric is a modular set function, which allows a globally optimal selection to be obtained using a simple greedy algorithm. The results are illustrated via numerical examples, and we show that the methods scale to problems well beyond the capabilities of current state-of-the-art convex relaxation techniques.10/13/2015 11:40 am PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2480247Spreading Processes in Multilayer Networks
https://www.computer.org/csdl/trans/tn/2015/02/07093190-abs.html
Several systems can be modeled as sets of interconnected networks or networks with multiple types of connections, here generally called multilayer networks. Spreading processes such as information propagation among users of online social networks, or the diffusion of pathogens among individuals through their contact network, are fundamental phenomena occurring in these networks. However, while information diffusion in single networks has received considerable attention from various disciplines for over a decade, the study of spreading processes in multilayer networks is still a young research area presenting many challenging research issues. In this paper, we review the main models, results and applications of multilayer spreading processes and discuss some promising research directions.07/01/2015 5:05 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2425961Robustness of Large-Scale Stochastic Matrices to Localized Perturbations
https://www.computer.org/csdl/trans/tn/2015/02/07115154-abs.html
Many notions of network centrality can be formulated in terms of invariant probability vectors of suitably defined stochastic matrices encoding the network structure. Analogously, invariant probability vectors of stochastic matrices allow one to characterize the asymptotic behavior of many linear network dynamics, e.g., arising in opinion dynamics in social networks as well as in distributed averaging algorithms for estimation or control. Hence, a central problem in network science and engineering is that of assessing the robustness of such invariant probability vectors to perturbations possibly localized on some relatively small part of the network. In this work, upper bounds are derived on the total variation distance between the invariant probability vectors of two stochastic matrices differing on a subset <inline-formula><tex-math>$\mathcal {W}$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="como-ieq1-2438818.gif"/></alternatives></inline-formula> of rows. Such bounds depend on three parameters: the mixing time and the entrance time on the set <inline-formula><tex-math> $\mathcal {W}$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="como-ieq2-2438818.gif"/></alternatives> </inline-formula> for the Markov chain associated to one of the matrices; and the exit probability from the set <inline-formula><tex-math>$\mathcal {W}$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="como-ieq3-2438818.gif"/></alternatives></inline-formula> for the Markov chain associated to the other matrix. These results, obtained through coupling techniques, prove particularly useful in scenarios where <inline-formula><tex-math>$\mathcal {W}$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="como-ieq4-2438818.gif"/></alternatives></inline-formula> is a small subset of the state space, even if the difference between the two matrices is not small in any norm. 
Several applications to large-scale network problems are discussed, including robustness of Google’s PageRank algorithm, distributed averaging, consensus algorithms, and the voter model.07/01/2015 5:05 pm PSThttp://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2438818On the Influence of the Seed Graph in the Preferential Attachment Model
https://www.computer.org/csdl/trans/tn/2015/01/07024931-abs.html
We study the influence of the seed graph in the preferential attachment model, focusing on the case of trees. We first show that the seed has no effect from a weak local limit point of view. On the other hand, we conjecture that different seeds lead to different distributions of limiting trees from a total variation point of view. We take a first step in proving this conjecture by showing that seeds with different degree profiles lead to different limiting distributions for the (appropriately normalized) maximum degree, implying that such seeds lead to different (in total variation) limiting trees.
04/20/2015 11:48 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2397592
Bi-Virus SIS Epidemics over Networks: Qualitative Analysis
https://www.computer.org/csdl/trans/tn/2015/01/07046373-abs.html
The paper studies the qualitative behavior of a set of ordinary differential equations (ODE) that models the dynamics of bi-virus epidemics over bilayer networks. Each layer is a weighted digraph associated with a strain of virus; the weights <inline-formula><tex-math notation="LaTeX">$\gamma ^{z}_{ij}$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq1-2406252.gif"/></alternatives></inline-formula> represent the rates of infection from node <inline-formula><tex-math notation="LaTeX">$i$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq2-2406252.gif"/></alternatives></inline-formula> to node <inline-formula> <tex-math notation="LaTeX">$j$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="moura-ieq3-2406252.gif"/> </alternatives></inline-formula> of strain <inline-formula><tex-math notation="LaTeX">$z$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq4-2406252.gif"/></alternatives></inline-formula>. We establish a sufficient condition on the <inline-formula><tex-math notation="LaTeX">$\gamma$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq5-2406252.gif"/></alternatives></inline-formula>’s that guarantees survival of the fittest—only one strain survives. 
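To make the setup concrete, here is a minimal Euler-integration sketch of one standard form of the bi-virus SIS ODE (the uniform rates, the recovery rate delta = 1, and the complete graph are our illustrative assumptions, not the paper's setting). On a regular graph with uniform rates, the strain with the larger infection rate drives the other to extinction:

```python
import numpy as np

def bivirus_step(x, y, B1, B2, delta=1.0, dt=0.01):
    """One Euler step of a standard bi-virus SIS ODE: x_i, y_i are the
    probabilities that node i carries strain 1 or 2; a node infected by
    one strain is immune to the other, so the susceptible fraction is
    s_i = 1 - x_i - y_i."""
    s = 1.0 - x - y
    dx = -delta * x + s * (B1 @ x)
    dy = -delta * y + s * (B2 @ y)
    return x + dt * dx, y + dt * dy

# Complete graph on 4 nodes; strain 1 has twice the infection rate.
n = 4
A = np.ones((n, n)) - np.eye(n)
B1, B2 = 0.8 * A, 0.4 * A
x = np.full(n, 0.1)
y = np.full(n, 0.1)
for _ in range(20_000):          # integrate to t = 200
    x, y = bivirus_step(x, y, B1, B2)
# Survival of the fittest: strain 2 dies out, strain 1 settles at its
# single-strain endemic level 1 - delta / (0.8 * (n - 1)).
```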
We propose an ordering of the weighted digraphs, the <inline-formula> <tex-math notation="LaTeX">$\star$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="moura-ieq6-2406252.gif"/> </alternatives></inline-formula>-order, and show that if the weighted digraph of strain <inline-formula> <tex-math notation="LaTeX">$y$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="moura-ieq7-2406252.gif"/> </alternatives></inline-formula> is <inline-formula><tex-math notation="LaTeX">$\star$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq8-2406252.gif"/></alternatives></inline-formula>-dominated by the weighted digraph of strain <inline-formula><tex-math notation="LaTeX">$x$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="moura-ieq9-2406252.gif"/></alternatives></inline-formula>, then <inline-formula> <tex-math notation="LaTeX">$y$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="moura-ieq10-2406252.gif"/> </alternatives></inline-formula> dies out in the long run. We prove that the orbits of the ODE accumulate to an attractor that captures the survival of the fittest phenomenon. Due to the coupled, nonlinear, high-dimensional nature of the ODEs, there is no natural Lyapunov function to study their global qualitative behavior. We prove our results by combining two important properties of these ODEs: (i) monotonicity under a partial ordering on the set of graphs; and (ii) dimension-reduction under symmetry of the graphs. Property (ii) allows us to fully address the survival of the fittest for regular graphs. Then, by bounding the epidemic dynamics for generic networks by the dynamics on regular networks, we prove the result for general networks.
04/20/2015 11:48 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2406252
The Attention Automaton: Sensing Collective User Interests in Social Network Communities
https://www.computer.org/csdl/trans/tn/2015/01/07069269-abs.html
The vast quantity of information shared in social networks has brought us to an age of attention scarcity, where getting users to be attentive to a message is not a given. In fact, attention has become the limiting factor in the consumption of information by end users. Understanding what captures the collective attention within a community of users in a social network is invaluable to many applications, such as product marketing, targeted advertising, and social or political campaign organization. Several scholars have analyzed how information spreads in social networks under the constraint of attention. However, few papers provide a quantitative method to model and predict attention at every instant in the dynamic social web. In this paper, we propose the <italic>Attention Automaton</italic>, a probabilistic finite automaton that can estimate the collective attention of a user community. Communities are based on the geographical vicinity of users or on common interests (such as followers of a given account) on Twitter. We identify two key factors that drive collective user attention: (1) the attention <italic>volatility</italic> of the community (the frequency of change of trending topics), and (2) the selective categorical affinity of the user group towards certain trends. Our results, based on an eight-month dataset of Twitter trending topics across 111 geographic regions and audience trends of approximately 50 brands, indicate that the proposed <italic>Attention Automaton</italic> can predict audience reception of impending trends based on categorical filters and inherent oscillations in user activity.
04/20/2015 11:48 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2416691
Algorithmic Renormalization for Network Dynamics
https://www.computer.org/csdl/trans/tn/2015/01/07078942-abs.html
The aim of this work is to give a full, elementary exposition of a recently introduced algorithmic technique for renormalizing dynamic networks. The motivation is the analysis of time-varying graphs. We begin by showing how an arbitrary sequence of graphs over a fixed set of nodes can be <italic>parsed</italic> so as to capture hierarchically how information propagates across the nodes. Equipped with parse trees, we are then able to analyze the dynamics of averaging-based multiagent systems. We investigate the case of diffusive influence systems and build a renormalization framework to help resolve their long-term behavior. Introduced as a generalization of the Hegselmann-Krause model of multiagent consensus, these systems allow the agents to have their own, distinct communication rules. We formulate new criteria for the asymptotic periodicity of such systems.
04/20/2015 11:48 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2419133
2014 Index IEEE Transactions on Network Science and Engineering Vol. 1
https://www.computer.org/csdl/trans/tn/2015/01/07089349-abs.html
Presents the author/subject index for 2014 for this publication.
05/01/2015 3:34 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2400355
Random Walks, Markov Processes and the Multiscale Modular Organization of Complex Networks
https://www.computer.org/csdl/trans/tn/2014/02/07010026-abs.html
Most methods proposed to uncover communities in complex networks rely on combinatorial graph properties. Usually an edge-counting quality function, such as modularity, is optimized over all partitions of the graph, compared against a null random graph model. Here we introduce a systematic dynamical framework to design and analyze a wide variety of quality functions for community detection. The quality of a partition is measured by its Markov Stability, a time-parametrized function defined in terms of the statistical properties of a Markov process taking place on the graph. The Markov process provides a dynamical sweeping across all scales in the graph, and the time scale is an intrinsic parameter that uncovers communities at different resolutions. This dynamics-based community detection leads to a compound optimization, which favours communities of comparable centrality (as defined by the stationary distribution), and provides a unifying framework for spectral algorithms, as well as different heuristics for community detection, including versions of modularity and the Potts model. Our dynamic framework creates a systematic link between different stochastic dynamics and their corresponding notions of optimal communities under distinct (node and edge) centralities. We show that the Markov Stability can be computed efficiently to find multi-scale community structure in large networks.
02/04/2015 7:14 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2391998
An Efficient Curing Policy for Epidemics on Graphs
https://www.computer.org/csdl/trans/tn/2014/02/07010945-abs.html
We provide a dynamic policy for the rapid containment of a contagion process modeled as an SIS epidemic on a bounded degree undirected graph with <inline-formula><tex-math notation="LaTeX">$n$</tex-math><alternatives> <inline-graphic xlink:href="drakopoulos-ieq1-2393291.gif"/></alternatives></inline-formula> nodes. We show that if the budget <inline-formula><tex-math notation="LaTeX">$r$</tex-math><alternatives> <inline-graphic xlink:href="drakopoulos-ieq2-2393291.gif"/></alternatives></inline-formula> of curing resources available at each time is <inline-formula><tex-math notation="LaTeX">$\Omega (W)$</tex-math><alternatives> <inline-graphic xlink:href="drakopoulos-ieq3-2393291.gif"/></alternatives></inline-formula>, where <inline-formula> <tex-math notation="LaTeX">$W$</tex-math><alternatives><inline-graphic xlink:href="drakopoulos-ieq4-2393291.gif"/> </alternatives></inline-formula> is the CutWidth of the graph, and also of order <inline-formula> <tex-math notation="LaTeX">$\Omega (\log\; n)$</tex-math><alternatives> <inline-graphic xlink:href="drakopoulos-ieq5-2393291.gif"/></alternatives></inline-formula>, then the expected time until the extinction of the epidemic is of order <inline-formula><tex-math notation="LaTeX">$O(n/r)$</tex-math> <alternatives><inline-graphic xlink:href="drakopoulos-ieq6-2393291.gif"/></alternatives></inline-formula>, which is within a constant factor from optimal, as well as sublinear in the number of nodes. 
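A toy discrete-time caricature (our own construction, not the paper's continuous-time policy) of why a budget on the order of the CutWidth suffices: on a path graph, whose CutWidth is 1, spending a small per-step curing budget on one end of the infected region sweeps the infection off the graph.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, r = 30, 0.3, 2          # path graph; budget of r curings per step
infected = np.zeros(n, dtype=bool)
infected[:5] = True              # infection starts at the left end
t = 0
while infected.any() and t < 1000:
    t += 1
    nxt = infected.copy()
    # infection: every infected node infects each path-neighbor w.p. beta
    for i in np.flatnonzero(infected):
        for j in (i - 1, i + 1):
            if 0 <= j < n and rng.random() < beta:
                nxt[j] = True
    # curing: spend the whole budget on the rightmost infected nodes,
    # sweeping the infected interval back toward the left end
    for i in np.flatnonzero(nxt)[::-1][:r]:
        nxt[i] = False
    infected = nxt
# the infected interval shrinks by at least one node per step,
# so the epidemic is extinguished within a handful of steps
```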
Furthermore, if the CutWidth increases only sublinearly with <inline-formula><tex-math notation="LaTeX">$n$</tex-math><alternatives> <inline-graphic xlink:href="drakopoulos-ieq7-2393291.gif"/></alternatives></inline-formula>, a sublinear expected time to extinction is possible with a sublinearly increasing budget <inline-formula><tex-math notation="LaTeX">$r$ </tex-math><alternatives><inline-graphic xlink:href="drakopoulos-ieq8-2393291.gif"/></alternatives></inline-formula>.
03/03/2017 5:59 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2393291
Synchronization of Diffusively-Connected Nonlinear Systems: Results Based on Contractions with Respect to General Norms
https://www.computer.org/csdl/trans/tn/2014/02/07017573-abs.html
Contraction theory provides an elegant way to analyze the behavior of certain nonlinear dynamical systems. In this paper, we discuss the application of contraction to synchronization of diffusively interconnected components described by nonlinear differential equations. We provide estimates of convergence of the difference in states between components, in the cases of line, complete, and star graphs, and Cartesian products of such graphs. We base our approach on contraction theory, using matrix measures derived from norms that are not induced by inner products. Such norms are the most appropriate in many applications, but proofs cannot rely upon Lyapunov-like linear matrix inequalities, and different techniques, such as the use of the Perron-Frobenius Theorem in the cases of <inline-formula><tex-math notation="LaTeX">$L^1$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="sontag-ieq1-2395075.gif"/></alternatives></inline-formula> or <inline-formula> <tex-math notation="LaTeX">$L^{\infty }$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="sontag-ieq2-2395075.gif"/> </alternatives></inline-formula> norms, must be introduced.
02/04/2015 7:14 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2015.2395075
Distributed Online Convex Optimization Over Jointly Connected Digraphs
https://www.computer.org/csdl/trans/tn/2014/01/06930789-abs.html
This paper considers networked online convex optimization scenarios from a regret analysis perspective. At each round, each agent in the network commits to a decision and incurs a local cost given by functions that are revealed over time and whose unknown evolution model might be adversarially adaptive to the agent’s behavior. The goal of each agent is to incur a cumulative cost over time with respect to the sum of local functions across the network that is competitive with the best single centralized decision in hindsight. To achieve this, agents cooperate with each other using local averaging over time-varying weight-balanced digraphs as well as subgradient descent on the local cost functions revealed in the previous round. We propose a class of coordination algorithms that generalize distributed online subgradient descent and saddle-point dynamics, allowing proportional-integral (and higher-order) feedback on the disagreement among neighboring agents. We show that our algorithm design achieves logarithmic agent regret (when local objectives are strongly convex), or square-root agent regret (when local objectives are convex) in scenarios where the communication graphs are jointly connected. Simulations in a medical diagnosis application illustrate our results.
03/09/2015 10:33 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2014.2363554
Decoding Binary Node Labels from Censored Edge Measurements: Phase Transition and Efficient Recovery
https://www.computer.org/csdl/trans/tn/2014/01/06949658-abs.html
We consider the problem of clustering a graph <inline-formula><tex-math notation="LaTeX">$G$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq1-2368716.gif"/></alternatives></inline-formula> into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables <inline-formula><tex-math notation="LaTeX">$Y=B_G x \oplus Z$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq2-2368716.gif"/></alternatives></inline-formula>, where <inline-formula> <tex-math notation="LaTeX">$B_G$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq3-2368716.gif"/> </alternatives></inline-formula> is the incidence matrix of a graph <inline-formula><tex-math notation="LaTeX">$G$ </tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq4-2368716.gif"/></alternatives></inline-formula>, <inline-formula><tex-math notation="LaTeX">$x$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq5-2368716.gif"/></alternatives></inline-formula> is the vector of unknown vertex variables (with a uniform prior), and <inline-formula><tex-math notation="LaTeX">$Z$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq6-2368716.gif"/></alternatives></inline-formula> is a noise vector with Bernoulli<inline-formula><tex-math notation="LaTeX">$(\varepsilon )$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq7-2368716.gif"/></alternatives></inline-formula> i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. 
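In the noiseless case (Z = 0), the measurement model and recovery up to a global flip can be sketched directly (the graph and labels below are illustrative choices of our own; all arithmetic is over GF(2)):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n = 8
# A connected graph: a path backbone plus a few random chords.
edges = [(i, i + 1) for i in range(n - 1)]
edges += [(i, j) for i in range(n) for j in range(i + 2, n)
          if rng.random() < 0.3]

x = rng.integers(0, 2, size=n)               # hidden binary vertex labels
Y = {e: x[e[0]] ^ x[e[1]] for e in edges}    # noiseless edge measurements

# On a connected graph the labels are recovered up to a global flip by
# propagating along edges: x_j = x_i XOR Y_ij.
adj = {i: [] for i in range(n)}
for (i, j), y in Y.items():
    adj[i].append((j, y))
    adj[j].append((i, y))
lab = [None] * n
lab[0] = 0                                   # pin one label arbitrarily
q = deque([0])
while q:
    i = q.popleft()
    for j, y in adj[i]:
        if lab[j] is None:
            lab[j] = lab[i] ^ y
            q.append(j)
lab = np.array(lab)
# lab now equals x or its complement (the unavoidable global flip)
```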
Without noise, exact recovery (up to global flip) of <inline-formula><tex-math notation="LaTeX">$x$ </tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq8-2368716.gif"/></alternatives></inline-formula> is possible if and only if the graph <inline-formula><tex-math notation="LaTeX">$G$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq9-2368716.gif"/></alternatives></inline-formula> is connected, with a sharp threshold at the edge probability <inline-formula><tex-math notation="LaTeX">$\log (n)/n$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq10-2368716.gif"/></alternatives></inline-formula> for Erdős-Rényi random graphs. The first goal of this paper is to determine how the edge probability <inline-formula> <tex-math notation="LaTeX">$p$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq11-2368716.gif"/> </alternatives></inline-formula> needs to scale to allow exact recovery in the presence of noise. Defining the degree rate of the graph by <inline-formula><tex-math notation="LaTeX">$\alpha =np/\log (n)$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq12-2368716.gif"/></alternatives></inline-formula>, it is shown that exact recovery is possible if and only if <inline-formula><tex-math notation="LaTeX">$\alpha >2/(1-2\varepsilon )^2+ o(1/(1-2\varepsilon )^2)$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq13-2368716.gif"/> </alternatives></inline-formula>. In other words, <inline-formula><tex-math notation="LaTeX">$2/(1-2\varepsilon )^2$ </tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq14-2368716.gif"/></alternatives></inline-formula> is the information-theoretic threshold for exact recovery at low SNR. 
In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph <inline-formula><tex-math notation="LaTeX">$G$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq15-2368716.gif"/></alternatives></inline-formula>, defining the degree rate as <inline-formula><tex-math notation="LaTeX">$\alpha =d/\log (n)$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq16-2368716.gif"/></alternatives></inline-formula>, where <inline-formula> <tex-math notation="LaTeX">$d$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq17-2368716.gif"/> </alternatives></inline-formula> is the minimum degree of the graph, it is shown that the proposed method achieves the rate <inline-formula><tex-math notation="LaTeX">$\alpha > 4((1+\lambda )/(1-\lambda )^2)/(1-2\varepsilon )^2+ o(1/(1-2\varepsilon )^2)$</tex-math><alternatives><inline-graphic xlink:type="simple" xlink:href="bandeira-ieq18-2368716.gif"/> </alternatives></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$1-\lambda$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq19-2368716.gif"/></alternatives></inline-formula> is the spectral gap of the graph <inline-formula><tex-math notation="LaTeX">$G$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="bandeira-ieq20-2368716.gif"/></alternatives></inline-formula>.
03/09/2015 10:33 am PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2014.2368716
Robust Network Routing under Cascading Failures
https://www.computer.org/csdl/trans/tn/2014/01/06965647-abs.html
We propose a dynamical model for cascading failures in single-commodity network flows. In the proposed model, the network state consists of flows <italic>and</italic> activation status of the links. Network dynamics is determined by a, possibly state-dependent and adversarial, disturbance process that reduces flow capacity on the links, and routing policies at the nodes that have access to the network state, but are oblivious to the presence of the disturbance. Under the proposed dynamics, a link becomes irreversibly inactive either due to an overload condition on itself or on all of its immediate downstream links. The coupling between link activation and flow dynamics implies that the links that become inactive successively are not necessarily adjacent to each other, and hence the pattern of cascading failure under our model is qualitatively different from that of standard cascade models. The magnitude of a disturbance process is defined as the sum of cumulative capacity reductions across time and links of the network, and the margin of resilience of the network is defined as the infimum over the magnitude of all disturbance processes under which the links at the origin node become inactive. We propose an algorithm to compute an upper bound on the margin of resilience for the setting where the routing policy only has access to information about the local state of the network. For the limiting case when the routing policies update their actions as fast as the network dynamics, we identify sufficient conditions on network parameters under which the upper bound is tight under an appropriate routing policy. Our analysis relies on making connections between network parameters and monotonicity in network state evolution under the proposed dynamics.
02/04/2015 5:12 pm PST
http://doi.ieeecomputersociety.org/10.1109/TNSE.2014.2373358
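As a minimal illustration of the overload mechanism (a parallel-link caricature of our own, far simpler than the general model in the paper), consider routing a fixed demand equally across the active links between an origin and a destination, and irreversibly deactivating any overloaded link:

```python
def cascade(demand, caps):
    """Route `demand` equally over the active parallel links; any link
    whose load exceeds its capacity becomes irreversibly inactive;
    repeat until no further link fails. Returns surviving capacities."""
    active = list(range(len(caps)))
    while active:
        load = demand / len(active)
        failed = [i for i in active if load > caps[i]]
        if not failed:
            break
        active = [i for i in active if i not in failed]
    return [caps[i] for i in active]

# Capacities [4, 4, 4] carry demand 9 at load 3 per link.  A disturbance
# weakening one link to capacity 2 triggers a full cascade: the weak
# link fails first, then the remaining two are overloaded at load 4.5.
```

In this caricature, the margin of resilience corresponds to the smallest total capacity reduction that leaves every active link overloaded; the paper's model is far richer, since failures can also propagate through downstream links rather than through shared load alone.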