diff --git a/.gitattributes b/.gitattributes
index 55cab133643a2a73e083373d2106533678d0edd5..05122713f59a932ed66b3ce38ef6218e36cd018b 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -56,3 +56,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+cs_metadata_2020.json filter=lfs diff=lfs merge=lfs -text
diff --git a/cs_metadata_2020.json b/cs_metadata_2020.json
new file mode 100644
index 0000000000000000000000000000000000000000..c7eed8d4308437ce773e63ec380f7df473bc0978
--- /dev/null
+++ b/cs_metadata_2020.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6336d9c464833b390e9795b2308fff52c75a78d2655ea8cdcab0217ec7a71cb
+size 29489935
diff --git a/txt/2101.04493.txt b/txt/2101.04493.txt
new file mode 100644
index 0000000000000000000000000000000000000000..df364c3d17342e37695f8450e95ecf0f2891b2cb
--- /dev/null
+++ b/txt/2101.04493.txt
@@ -0,0 +1,414 @@
PVDECONV: POINT-VOXEL DECONVOLUTION FOR AUTOENCODING CAD CONSTRUCTION IN 3D
Kseniya Cherenkova (SnT, University of Luxembourg; Artec3D), Djamila Aouada (SnT, University of Luxembourg), Gleb Gusev (Artec3D)

ABSTRACT
We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the pairs of 3D scans and CAD models. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time as compared to state-of-the-art models for 3D data generation.

Index Terms — CC3D, point cloud autoencoder, CAD models generation, Scan2CAD.

1. INTRODUCTION
Recently, deep learning (DL) for 3D data analysis has seen a boost in successful and competitive solutions for segmentation, detection and classification [1], and in real-life applications such as self-driving, robotics, medicine, and augmented reality. In industrial manufacturing, 3D scanning of fabricated parts is an essential step of product quality control, where the 3D scans of real objects are compared to the original Computer Aided Design (CAD) models. While most consumer solutions for 3D scanning are good enough for capturing the general shape of an object, artifacts can be introduced in the parts of the object that are physically inaccessible for 3D scanning, resulting in the loss of sharp features and fine details.
This paper focuses on recovering scanning artifacts in an autoencoding data-driven manner. In addition to presenting a new point cloud autoencoder, we introduce a new 3D dataset, referred to as CC3D, which stands for CAD Construction
in 3D.

Fig. 1. Examples of CC3D data: from left to right, CAD models, corresponding 3D scans, 10k input point clouds, and results of the proposed autoencoder.

We further provide an analysis focused on real 3D scanned data, keeping in mind real-world constraints, i.e., variability, complexity, artifacts, memory and speed requirements. The first two columns in Fig. 1 give some examples from the CC3D data: the CAD model and its 3D scanned version in triangular mesh format. While the most recent existing solutions [2, 3, 4, 5] for 3D data autoencoders mostly focus on low-resolution data configurations (approximately 2500 points), we see it as more beneficial for real data to experiment in higher dimensions, since this is what brings the important 3D object details into the big data learning perspective.
Several publicly available datasets related to CAD modelling, such as ModelNet [6], ShapeNet [7], and ABC [8], have been released in recent years. A summary of the features they offer can be found in Table 1. These datasets have boosted research on deep learning mainly for 3D point clouds.
Similarly, our CC3D dataset should support research efforts in addressing real-world challenges. Indeed, this dataset provides various 3D scanned objects together with their ground-truth CAD models. The models collected in the CC3D dataset are not restricted to any object category and/or complexity. The 3D scans offer challenging cases of missing data, smoothed geometry and fusion artifacts in the form of varying protrusions and swept holes. Moreover, the resolution of the 3D scans is typically high, with more than 100k faces per mesh.
In summary, the contributions of this paper include: (1) a 3D dataset, CC3D, a collection of 50k+ aligned pairs of meshes, each pair consisting of a CAD model and its virtually 3D scanned counterpart with the corresponding scanning artifacts; (2) a CC3D autoencoder architecture on 10k point clouds learned from CC3D data; (3) a Point-Voxel DeConvolution (PVDeConv) block for the decoder part of our model, combining point features at coarse and fine levels of the data.
The remainder of the paper is organized as follows: Section 2 reviews relevant state-of-the-art work in 3D data autoencoding. In Section 3 we give a brief overview of the core components our work is built upon. Section 4 describes the main contributions of this paper in more detail. In Section 5 the results and comparison with related methods are presented. Section 6 gives the conclusions.

2. RELATED WORK
The choice of DL architecture and 3D data representation is usually defined by existing practices and available datasets for learning [9]. Voxel-based representations have pioneered 3D data analysis, applying 3D Convolutional Neural Networks (CNNs) directly on a regular voxel grid [10]. Despite models improved in terms of memory consumption, e.g., [11], their inability to resolve fine object details remains the main limiting factor in practical use.
Other works introduce convolutions directly on graph structures, e.g., [12]. They attempt to generalize DL models to non-Euclidean domains such as graphs and manifolds [13], and offer analogues of pooling/unpooling operations as well [14]. However, they are not applicable for learning on real unconstrained data, as they either require meshes to be registered to a common template, deal inefficiently with meshes of up to several thousand faces, or are specific to segmentation or classification tasks only.
Recent advances in developing efficient architectures for 3D data analysis are mainly related to point cloud based methods [15, 16]. Decoders [17, 2, 18, 3, 19] have made point clouds a highly promising representation for 3D object generation and completion using neural networks. Successful works on generative adversarial networks (GANs) (e.g., [20]) show the applicability of different GAN models operating on raw point clouds.
In this paper, we comply with the common autoencoder approach, i.e., we use a point cloud encoder to embed the point cloud input, and design a decoder to generate a complete point cloud from the embedding of the encoder.

3. BACKGROUND AND MOTIVATION
We herein present the fundamental building blocks that comprise the core of this paper, namely the point cloud, the metric on it, and the DL backbone. Together, these elements make the CC3D autoencoder perform efficiently on high-resolution 3D data.
A point cloud S can be represented as S = {(p_k, f_k)}, where each p_k holds the 3D coordinates of the k-th input point, f_k is the feature corresponding to it, and the size of f_k defines the dimensionality of the point feature space. Note that while it is straightforward to include auxiliary information (such as point normals) in our architecture, in this paper we exclusively employ the xyz coordinates of p_k as the input data.
We build on Point-Voxel Convolution (PVConv), a memory-efficient architecture for learning on 3D point clouds presented in [21]. To the best of our knowledge, this is the first development of an autoencoder based on PVCNN as the encoder. Briefly, PVConv combines fine-grained feature transformation on points with coarse-grained neighboring feature aggregation in the voxel space of the point cloud. Three basic operations are performed in the coarse branch, namely voxelization, followed by voxel-based 3D convolution, and then devoxelization. The point-based branch aggregates the features of each individual point with a multilayer perceptron (MLP), providing high-resolution details. The features from both branches are aggregated into a hidden feature representation.
The formulation of convolution in both the voxel-based and point-based cases is the following:

    y_k = Σ_{x_i ∈ N(x_k)} K(x_k, x_i) F(x_i),   (1)

where, for each center x_k and its neighborhood N(x_k), the neighboring features F(x_i) are convolved with the kernel K(x_k, x_i). The choice of PVCNN is due to its efficiency in training on high-resolution 3D data; indeed, this makes it a good candidate for working with real-life data. As stated in [21], PVConv combines the advantages of point-based methods, reducing memory consumption, with those of voxel-based methods, improving data locality and regularity.
For the loss function, the Chamfer distance [22] is used to measure the quality of the autoencoder. It is a differentiable metric, invariant to permutations of points in both the ground-truth and target point clouds, S_G and S, respectively. It is defined as follows:

    d_CD(S, S_G) = Σ_{x ∈ S} min_{y ∈ S_G} ||x − y||² + Σ_{y ∈ S_G} min_{x ∈ S} ||x − y||².   (2)

As follows from its definition, no correspondence or equal number of points in S and S_G is required for the computation of d_CD, making it possible to work with different resolutions for the encoder and decoder.
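For illustration, Eq. (2) can be computed with a few lines of NumPy; this is a minimal sketch of our own (function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def chamfer_distance(S, S_G):
    """Symmetric Chamfer distance d_CD of Eq. (2) between point clouds
    S (N, 3) and S_G (M, 3); N and M need not be equal."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((S[:, None, :] - S_G[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbour term in each direction, summed over both clouds.
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
```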
4. PROPOSED AUTOENCODING OF 3D SCANS TO CAD MODELS
This paper studies the problem of 3D point cloud autoencoding in a deep learning setup, and in particular the choice of the architecture of a 3D point cloud decoder for efficient reconstruction of point clouds sampled from corresponding pairs of 3D scans and CAD models.

4.1. CC3D dataset
The CC3D dataset of 3D CAD models was collected from a free online service for sharing CAD designs [23]. In total, the collected dataset contains 50k+ models in STEP format, unrestricted to any category, with complexity varying from simple to highly detailed designs (see examples in Fig. 1). These CAD models were converted to meshes, and each mesh was virtually scanned using a proprietary 3D scanning pipeline developed by Artec3D [24]. The typical size of the resulting scans is on the order of 100k points and faces, while the meshes converted from CAD models are usually more than an order of magnitude lighter.
In order to illustrate the uniqueness of our dataset, Table 1 summarizes the available CAD-like datasets and the semantic information they provide. Unlike ShapeNet [7] and ModelNet [6], the CC3D dataset is a collection of 3D objects unrestricted to any category, with complexity varying from very basic to highly detailed models. One of the most recent datasets, the ABC dataset [8], would have been a valuable collection for our task due to its size, had it contained 3D scanned models alongside the ground-truth CAD objects. The availability of CAD-3D scan pairings, the high resolution of the meshes and the variability of the models make the CC3D dataset stand out among the alternatives. The CC3D dataset will be shared with the research community.

Dataset           #Models  CAD  Curves  Patches  Semantics  Categories  3D scan
CC3D (ours)       50k+     ✓    ✗       ✗        ✗          ✗           ✓
ABC [8]           1M+      ✓    ✓       ✓        ✗          ✗           ✗
ShapeNet [7]      3M+      ✗    ✗       ✗        ✓          ✓           ✗
ShapeNetCore [7]  51k+     ✗    ✗       ✗        ✓          ✓           ✗
ShapeNetSem [7]   12k      ✗    ✗       ✗        ✓          ✓           ✗
ModelNet [6]      151k+    ✗    ✗       ✗        ✓          ✓           ✗

Table 1. Summary of datasets with CAD-like data. Note that only ABC and CC3D offer CAD models in b-rep (boundary representation) format in addition to triangular meshes.

4.2. CC3D Autoencoder
Our decoder is a modified version of PVCNN, where we cut the final classification/segmentation layer. The proposed PVDeConv structure is depicted in Fig. 2. The fine point-based branch is implemented as a shared transposed MLP, allowing the same number of points to be maintained throughout the autoencoder, while the coarse branch allows the features to be aggregated at different voxel grid resolutions, thus modelling the neighborhood information at different scales.

Fig. 2. Overview of the CC3D autoencoder architecture and the PVDeConv module. The features from the coarse voxel-based and fine point-based branches are fused to be unwrapped to the output point cloud.

The PVDeConv block consists of 3D volumetric deconvolutions to aggregate the features, with dropout, batch normalization and a nonlinear activation function after each 3D deconvolution. Features from both branches are fused at the final level and passed through an MLP to produce the output points.
The transposed 3D convolution operator, used in PVDeConv, multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature channels. This operation can be seen as the gradient of 3D convolution, although it is not an actual deconvolution operation.
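As a rough PyTorch illustration of the two-branch fusion described above (not the authors' implementation; the class name, layer sizes, additive fusion and the nearest-cell devoxelization shortcut are all our assumptions):

```python
import torch
import torch.nn as nn

class PVDeConvSketch(nn.Module):
    """Sketch of a PVDeConv-style block: a coarse voxel branch with a
    transposed 3D convolution and a fine per-point shared MLP, fused."""

    def __init__(self, in_ch, out_ch, resolution):
        super().__init__()
        self.resolution = resolution
        self.voxel_branch = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.LeakyReLU(0.1),
            nn.Dropout(0.1),
        )
        # A shared MLP over points is a 1x1 convolution along the point axis.
        self.point_branch = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, feats, voxel_feats, coords):
        # feats: (B, C, N) point features; voxel_feats: (B, C, R, R, R);
        # coords: (B, N, 3), normalized to [0, 1).
        v = self.voxel_branch(voxel_feats)
        # Trilinear devoxelization is approximated here by a nearest-cell
        # lookup for brevity.
        idx = (coords * self.resolution).long().clamp(0, self.resolution - 1)
        b = torch.arange(v.size(0), device=v.device)[:, None]
        coarse = v[b, :, idx[..., 0], idx[..., 1], idx[..., 2]].transpose(1, 2)
        return coarse + self.point_branch(feats)
```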
5. EXPERIMENTS
We evaluate the proposed autoencoder by training first on our CC3D dataset, and then on the ShapeNetCore [7] dataset.

5.1. Training on CC3D
Dataset. The CC3D dataset is randomly split into three non-intersecting folds: 80% for training, 10% for testing and 10% for validation. Ground-truth point clouds are generated by uniformly sampling N = 10k points on the CAD model surfaces, while the input point clouds are sampled in the same manner from the corresponding 3D scans of the models. The data is normalized to (0, 1).
Implementation Details. The encoder follows the structure in [21]; the coarse blocks are ((64, 1, 32), (64, 2, 16), (128, 1, 16), 1024), where the triplets describe a voxel-based convolutional PVConv block in terms of number of channels, number of blocks, and voxel resolution. The last number gives the resulting embedding size for the coarse part, which, combined with shared MLP cloud blocks = (256, 128), gives a feature embedding size of 1472. The decoder coarse blocks are ((128, 1, 16), (64, 2, 16), (64, 1, 32), 128), where the triplets are PVDeConv blocks concatenated with decoder point-based fine blocks of size (256, 128).
Training setup. The autoencoder is trained with the Chamfer loss for 50 epochs on two Quadro P6000 GPUs with batch size 80 in data-parallel mode. The overall training takes approximately 15 hours. The best model is chosen based on the validation set.
Evaluation. The qualitative results of our autoencoder on the CC3D data are presented in Fig. 3. We note that the fine details are captured in these challenging cases.

Fig. 3. Results of our autoencoder on CC3D data with 10k points for input and output. The left of each pair of results is the input point cloud of 10k points; the right is the autoencoder reconstruction of 10k points.

Method           Chamfer distance, ×10⁻³
AtlasNet [2]     1.769
FoldingNet [17]  1.648
PCN [19]         1.472
TopNet [3]       0.972
Ours             0.804

Table 2. CC3D autoencoder results on the ShapeNetCore dataset: comparison against previous works (N = 2.5k).

5.2. Training on ShapeNetCore
To demonstrate the competitive performance of our CC3D autoencoder, we train it on the ShapeNetCore dataset following the train/test/val split of [3], with the number of sampled points N = 2500 for a fair comparison. Since we do not have scanned models for the ShapeNet data, we add 3% Gaussian noise to each point's location. The rest of the training setup is replicated from the CC3D configuration. The final metric is the mean Chamfer distance averaged per model across all classes. The numbers for the other methods are reported from [3]. The results of the evaluation of our method against state-of-the-art methods are shown in Table 2. We note that our result surpasses the previous works by a significant margin. Qualitative examples on ShapeNetCore data are given in Fig. 5.

Fig. 4. Chamfer distance distribution for our autoencoder. On the test set of CC3D, for point clouds of size N = 10k, the mean Chamfer distance is 1.26×10⁻³ with a standard deviation of 0.794×10⁻³. On the ShapeNetCore test set with N = 2.5k, it is 0.804×10⁻³ with a standard deviation of 0.766×10⁻³.

Fig. 5. Results of our autoencoder on ShapeNetCore data. The top row shows the input 2.5k point clouds; the bottom row shows the reconstructions of our autoencoder.

The distribution of distances given in Fig. 4 implies that the CC3D dataset presents advanced challenges for our autoencoder: it performs at 1.26×10⁻³ average Chamfer distance on CC3D, while it reaches 0.804×10⁻³ on ShapeNetCore.
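For reference, the uniform surface sampling used to build the ground-truth clouds in Section 5.1 can be reproduced with standard area-weighted triangle sampling; a minimal sketch under that assumption (not the authors' code):

```python
import numpy as np

def sample_surface(vertices, faces, n=10_000):
    """Sample n points uniformly on a triangle mesh: pick triangles with
    probability proportional to area, then uniform barycentric points."""
    tri = vertices[faces]                          # (F, 3, 3)
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    choice = np.random.choice(len(faces), n, p=areas / areas.sum())
    # Square-root trick gives uniform barycentric coordinates.
    r1 = np.sqrt(np.random.rand(n, 1))
    r2 = np.random.rand(n, 1)
    return (1 - r1) * a[choice] + r1 * (1 - r2) * b[choice] + r1 * r2 * c[choice]
```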
6. CONCLUSIONS
In this work, we proposed a Point-Voxel Deconvolution (PVDeConv) block for fast and efficient deconvolution on 3D point clouds. It was used in combination with a new dataset, CC3D, for autoencoding 3D scans into their corresponding synthetic CAD models. The CC3D dataset offers pairs of CAD models and 3D scans, totaling 50k+ objects. Our CC3D autoencoder on point clouds is memory and time efficient. Furthermore, it demonstrates superior results compared to existing methods on ShapeNet data. As future work, different types of losses, such as the quadric loss [5], will be investigated to improve sharpness at edges. Testing variants of the CC3D autoencoder with different configurations of stacked PVConv and PVDeConv layers will also be considered. Finally, we believe that the CC3D dataset itself can assist in the analysis of real 3D scanned data with deep learning methods.

7. REFERENCES
[1] Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun, "Deep learning for 3d point clouds: A survey," ArXiv, vol. abs/1912.12033, 2019.
[2] Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry, "Atlasnet: A papier-mâché approach to learning 3d surface generation," CoRR, vol. abs/1802.05384, 2018.
[3] Lyne P. Tchapmi, Vineet Kosaraju, S. Hamid Rezatofighi, Ian Reid, and Silvio Savarese, "Topnet: Structural point cloud decoder," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[4] Isaak Lim, Moritz Ibing, and Leif Kobbelt, "A convolutional decoder for point clouds using adaptive instance normalization," CoRR, vol. abs/1906.11478, 2019.
[5] Nitin Agarwal, Sung-Eui Yoon, and M. Gopi, "Learning embedding of 3d models with quadric loss," CoRR, vol. abs/1907.10250, 2019.
[6] Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and J. Xiao, "3d shapenets: A deep representation for volumetric shapes," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1912–1920.
[7] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu, "Shapenet: An information-rich 3d model repository," 2015.
[8] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo, "Abc: A big cad model dataset for geometric deep learning," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[9] Eman Ahmed, Alexandre Saint, Abd El Rahman Shabayek, Kseniya Cherenkova, Rig Das, Gleb Gusev, Djamila Aouada, and Björn E. Ottersten, "Deep learning advances on different 3d data representations: A survey," ArXiv, vol. abs/1808.01462, 2018.
[10] D. Maturana and S. Scherer, "Voxnet: A 3d convolutional neural network for real-time object recognition," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep. 2015, pp. 922–928.
[11] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger, "Octnet: Learning deep 3d representations at high resolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[12] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black, "Generating 3D faces using convolutional mesh autoencoders," in European Conference on Computer Vision (ECCV), 2018, pp. 725–741.
[13] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, "Geometric deep learning: Going beyond euclidean data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, July 2017.
[14] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or, "Meshcnn: A network with an edge," ACM Transactions on Graphics (TOG), vol. 38, no. 4, pp. 90, 2019.
[15] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas, "Pointnet++: Deep hierarchical feature learning on point sets in a metric space," CoRR, vol. abs/1706.02413, 2017.
[16] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon, "Dynamic graph cnn for learning on point clouds," ACM Trans. Graph., vol. 38, no. 5, Oct. 2019.
[17] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian, "Foldingnet: Interpretable unsupervised learning on 3d point clouds," ArXiv, vol. abs/1712.07262, 2017.
[18] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari, "3d point-capsule networks," CoRR, vol. abs/1812.10775, 2018.
[19] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert, "Pcn: Point completion network," in 3D Vision (3DV), 2018 International Conference on, 2018.
[20] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabás Póczos, and Ruslan Salakhutdinov, "Point cloud GAN," CoRR, vol. abs/1810.05795, 2018.
[21] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han, "Point-voxel cnn for efficient 3d deep learning," in Advances in Neural Information Processing Systems, 2019.
[22] Haoqiang Fan, Hao Su, and Leonidas Guibas, "A point set generation network for 3d object reconstruction from a single image," July 2017, pp. 2463–2471.
[23] "3dcontentcentral," https://www.3dcontentcentral.com, Accessed: 2020-02-02.
[24] "Artec3d," https://www.artec3d.com/, Accessed: 2020-02-02.
\ No newline at end of file
diff --git a/txt/2101.07621.txt b/txt/2101.07621.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e0e28b0f664d7bbd4ac4a636ea01e401c41e1830
--- /dev/null
+++ b/txt/2101.07621.txt
@@ -0,0 +1,1082 @@
arXiv:2101.07621v2 [cs.GT] 29 May 2021

Trading Transforms of Non-weighted Simple Games and Integer Weights of Weighted Simple Games*
Akihiro Kawana†  Tomomi Matsui‡
June 1, 2021

Abstract
This study investigates simple games. A fundamental research question in this field is to determine necessary and sufficient conditions for a simple game to be a weighted majority game. Taylor and Zwicker (1992) showed that a simple game is non-weighted if and only if there exists a trading transform of finite size. They also provided an upper bound on the size of such a trading transform, if it exists. Gvozdeva and Slinko (2011) improved that upper bound; their proof employed a property of linear inequalities demonstrated by Muroga (1971). In this study, we provide a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof employs Farkas' lemma (1894), and yields an improved upper bound on the size of a trading transform.
We also discuss an integer-weight representation of a weighted simple game, improving the bounds obtained by Muroga (1971). We show that our bound on the quota is tight when the number of players is less than or equal to five, based on the computational results obtained by Kurz (2012).
Furthermore, we discuss the problem of finding an integer-weight representation under the assumption that we have the minimal winning coalitions and the maximal losing coalitions. In particular, we show the performance of a rounding method.
Lastly, we address roughly weighted simple games. Gvozdeva and Slinko (2011) showed that a given simple game is not roughly weighted if and only if there exists a potent certificate of non-weightedness.

*A preliminary version of this paper was presented at the Seventh International Workshop on Computational Social Choice (COMSOC-2018), Rensselaer Polytechnic Institute, Troy, NY, USA, 25-27 June, 2018.
†Graduate School of Engineering, Tokyo Institute of Technology
‡Graduate School of Engineering, Tokyo Institute of Technology

We give an upper bound on the length of a potent certificate of non-weightedness. We also discuss an integer-weight representation of a roughly weighted simple game.

1 Introduction
A simple game consists of a pair G = (N, W), where N is a finite set of players, and W ⊆ 2^N is an arbitrary collection of subsets of N. Throughout this paper, we denote |N| by n. Usually, the property

    (monotonicity): if S′ ⊇ S ∈ W, then S′ ∈ W,   (1)

is assumed. Subsets in W are called winning coalitions. We denote 2^N \ W by L, and subsets in L are called losing coalitions. A simple game (N, W) is said to be weighted if there exist a weight vector w ∈ R^N and q ∈ R satisfying the following property:

    (weightedness): for any S ⊆ N, S ∈ W if and only if Σ_{i∈S} w_i ≥ q.   (2)

Previous research established necessary and sufficient conditions that guarantee the weightedness of a simple game. [Elgot, 1961] and [Chow, 1961] investigated the theory of threshold logic and stated the condition for weightedness in terms of asummability. [Muroga, 1971] proved the sufficiency of asummability based on the theory of linear inequality systems and discussed some variations of these results in cases with a few variables. [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999] independently obtained necessary and sufficient conditions in terms of a trading transform. A trading transform of size j is a coalition sequence (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j), which may contain repetitions of coalitions, satisfying the condition

    ∀p ∈ N, |{i | p ∈ X_i}| = |{i | p ∈ Y_i}|.

A simple game is called k-trade robust if there is no trading transform of size j satisfying 1 ≤ j ≤ k, X_1, X_2, ..., X_j ∈ W, and Y_1, Y_2, ..., Y_j ∈ L. A simple game is called trade robust if it is k-trade robust for all positive integers k.
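As a small illustration of the definition above, the trading-transform condition can be checked directly; a sketch of our own, modelling coalitions as Python sets:

```python
from collections import Counter
from itertools import chain

def is_trading_transform(X, Y):
    """(X; Y) is a trading transform iff every player occurs in the
    X-coalitions exactly as often as in the Y-coalitions."""
    return (len(X) == len(Y) and
            Counter(chain.from_iterable(X)) == Counter(chain.from_iterable(Y)))

# ({1,2}, {3}; {1,3}, {2}): players 2 and 3 are traded between coalitions.
assert is_trading_transform([{1, 2}, {3}], [{1, 3}, {2}])
```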
Taylor and Zwicker showed that a given simple game G with n players is weighted if and only if G is 2^{2^n}-trade robust. In 2011, [Gvozdeva and Slinko, 2011] showed that a given simple game G is weighted if and only if G is (n+1)n^{n/2}-trade robust. [Freixas and Molinero, 2009b] proposed a variant of trade robustness, called invariant-trade robustness, which determines whether a simple game is weighted. The relations between the results in threshold logic and simple games are clarified in [Freixas et al., 2016, Freixas et al., 2017].
In Section 2, we show that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value of determinants of (n+1)×(n+1) 0–1 matrices. It is well known that α_{n+1} ≤ (n+2)^{(n+2)/2} (1/2)^{n+1}.
Our definition of a weighted simple game allows for arbitrary real numbers as weights. However, any weighted simple game can be represented by integer weights (e.g., see [Freixas and Molinero, 2009a]). An integer-weight representation of a weighted simple game consists of an integer vector w ∈ Z^N and some q ∈ Z satisfying the weightedness property (2). [Isbell, 1956] found an example of a weighted simple game with 12 players without a unique minimum-sum integer-weight representation. Examples for 9, 10, or 11 players are given in [Freixas and Molinero, 2009a, Freixas and Molinero, 2010]. In the field of threshold logic, examples of threshold functions requiring large weights are discussed by [Myhill and Kautz, 1961, Muroga, 1971, Håstad, 1994]. Some previous studies enumerate (minimal) integer-weight representations of simple games with a small number of players (e.g., [Muroga et al., 1962, Winder, 1965, Muroga et al., 1970, Krohn and Sudhölter, 1995]). For the case of n = 9 players, refer to [Kurz, 2012]. In general, [Muroga, 1971] (proof of Theorem 9.3.2.1) showed that (under the monotonicity property (1) and ∅ ∉ W ∋ N) every weighted simple game has an integer-weight representation satisfying 0 ≤ w_i ≤ α_n ≤ (n+1)^{(n+1)/2} (1/2)^n (∀i ∈ N) and 0 ≤ q ≤ nα_n ≤ n(n+1)^{(n+1)/2} (1/2)^n simultaneously. Here, α_n denotes the maximal value of determinants of n×n 0–1 matrices. [Wang and Williams, 1991] discussed Boolean functions that require more general surfaces to separate their true vectors from false vectors. [Hansen and Podolskii, 2015] investigates the complexity of computing Boolean functions by polynomial threshold functions. [Freixas, 2021] discusses a point-set-additive pseudo-weighting for a simple game, which assigns weights directly to coalitions.
In Section 3, we slightly improve Muroga's result and show that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w⊤) satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1 simultaneously. Based on the computational results of [Kurz, 2012], we also demonstrate the tightness of our bound on the quota when n ≤ 5.
For a family of minimal winning coalitions, [Peled and Simeone, 1985] proposed a polynomial-time algorithm for checking the weightedness of a given simple game. They also showed that for weighted simple games represented by minimal winning coalitions, all maximal losing coalitions can be computed in polynomial time. When we have the minimal winning coalitions and the maximal losing coalitions, there exists a linear inequality system whose solution gives a weight vector w ∈ R^N and q ∈ R satisfying property (2). However, it is less straightforward to find an integer-weight representation, as the problem transforms from linear programming to integer programming.
In Section 4, we address the problem of finding an integer-weight representation under the assumption that we have the minimal winning coalitions and the maximal losing coalitions. We show that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by at most (2−√2)n + (√2−1).
A simple game G = (N, W) is called roughly weighted if there exist a non-negative vector w ∈ R^N_+ and a real number q ∈ R, not all equal to zero ((q; w⊤) ≠ 0⊤), such that for any S ⊆ N the condition Σ_{i∈S} w_i < q implies S ∉ W, and Σ_{i∈S} w_i > q implies S ∈ W. We say that (q; w⊤) is a rough voting representation for G. Roughly weighted simple games were initially introduced by [Baugh, 1970]. [Muroga, 1971] (p. 208) studied them under the name of pseudothreshold functions.
[Taylor and Zwicker, 1999] discussed roughly weighted simple games and constructed several examples. [Gvozdeva and Slinko, 2011] developed a theory of roughly weighted simple games. A trading transform (X_1, X_2, ..., X_j; Y_1, Y_2, ..., Y_j) with all coalitions X_1, X_2, ..., X_j winning and Y_1, Y_2, ..., Y_j losing is called a certificate of non-weightedness. This certificate is said to be potent if the grand coalition N is among X_1, X_2, ..., X_j and the empty coalition is among Y_1, Y_2, ..., Y_j. [Gvozdeva and Slinko, 2011] showed that under the monotonicity property (1) and ∅ ∉ W ∋ N, a given simple game G is not roughly weighted if and only if there exists a potent certificate of non-weightedness whose length is less than or equal to (n+1)n^{n/2}. Further research on roughly weighted simple games appears in [Gvozdeva et al., 2013, Freixas and Kurz, 2014, Hameed and Slinko, 2015].
In Section 5, we show that (under the monotonicity property (1) and ∅ ∉ W ∋ N) the length of a potent certificate of non-weightedness is less than or equal to 2α_{n+1}, if it exists. We also show that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w⊤) of rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 0 ≤ Σ_{i∈N} w_i ≤ 2α_n.

2 Trading Transforms of Non-weighted Simple Games
In this section, we discuss the size of a trading transform that guarantees the non-weightedness of a given simple game. Throughout this section, we do not need to assume the monotonicity property (1). First, we introduce a linear inequality system for determining the weightedness of a given simple game. For any nonempty family of player subsets ∅ ≠ N ⊆ 2^N, we introduce a 0–1 matrix A(N) = (a(N)_{Si}) whose rows are indexed by the subsets in N and whose columns are indexed by the players in N, defined by

    a(N)_{Si} = 1 if i ∈ S ∈ N, and 0 otherwise.

A given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P1:  A(W)w − q1 ≥ 0,
         −A(L)w + q1 − ε1 ≥ 0,
         ε > 0,

in matrix form, (A(W), 1, 0; −A(L), −1, −1)(w; −q; ε) ≥ 0, ε > 0, where 0 (1) denotes a zero vector (all-one vector) of appropriate dimension.
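Feasibility of P1 can be tested numerically with an off-the-shelf LP solver by maximizing ε; a sketch using scipy (the encoding of players as indices 0, ..., n−1 and all names here are our own choices):

```python
import numpy as np
from scipy.optimize import linprog

def is_weighted(n, winning, losing):
    """LP test for P1: variables (w_1..w_n, q, eps); the game is weighted
    iff eps can be made positive subject to the P1 inequalities."""
    rows = []
    for S in winning:            # -sum_{i in S} w_i + q <= 0
        r = np.zeros(n + 2)
        r[list(S)] = -1.0
        r[n] = 1.0
        rows.append(r)
    for S in losing:             # sum_{i in S} w_i - q + eps <= 0
        r = np.zeros(n + 2)
        r[list(S)] = 1.0
        r[n] = -1.0
        r[n + 1] = 1.0
        rows.append(r)
    c = np.zeros(n + 2)
    c[n + 1] = -1.0              # maximize eps
    bounds = [(None, None)] * (n + 1) + [(0.0, 1.0)]
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)), bounds=bounds)
    return res.status == 0 and res.x[n + 1] > 1e-9

# Four players, quota 2, unit weights: a weighted example.
print(is_weighted(4, [{0, 1}, {2, 3}, {0, 1, 2, 3}], [{0}, {3}, set()]))  # True
```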
Farkas' Lemma [Farkas, 1902] states that P1 is infeasible if and only if the following system is feasible:

    D1:  A(W)⊤x − A(L)⊤y = 0,
         1⊤x − 1⊤y = 0,
         −1⊤y = −1,
         x ≥ 0, y ≥ 0.

For simplicity, we denote D1 by A_1 z = c, z ≥ 0, where

    A_1 = (A(W)⊤, −A(L)⊤; 1⊤, −1⊤; 0⊤, −1⊤),  z = (x; y),  c = (0; 0; −1).

Subsequently, we assume that D1 is feasible. Let Ã_1 z = c̃ be a linear equality system obtained from A_1 z = c by repeatedly removing redundant equalities. A column submatrix B̂ of Ã_1 is called a basis matrix if B̂ is a square invertible matrix. Variables corresponding to the columns of B̂ are called basic variables, and J_B̂ denotes the index set of basic variables. A basic solution with respect to B̂ is a vector z defined by z_i = ẑ_i (i ∈ J_B̂) and z_i = 0 (i ∉ J_B̂), where ẑ is the vector of basic variables satisfying ẑ = B̂^{−1} c̃. It is well known that if the linear inequality system D1 is feasible, then it has a basic feasible solution.
Let z′ be a basic feasible solution of D1 with respect to a basis matrix B. By Cramer's rule, z′_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is the matrix formed by replacing the i-th column of B by c̃. Because B_i is an integer matrix, det(B) z′_i = det(B_i) is an integer for any i ∈ J_B. Let (x′⊤, y′⊤)⊤ be the vector corresponding to z′, and (x*⊤, y*⊤) = |det(B)| (x′⊤, y′⊤). Cramer's rule states that both x* and y* are integer vectors. The pair of vectors x* and y* satisfies the following conditions:

    A(W)⊤x* − A(L)⊤y* = |det(B)| (A(W)⊤x′ − A(L)⊤y′) = |det(B)| 0 = 0,
    Σ_{S∈W} x*_S − Σ_{S∈L} y*_S = |det(B)| (1⊤x′ − 1⊤y′) = |det(B)| 0 = 0,
    Σ_{S∈L} y*_S = |det(B)| 1⊤y′ = |det(B)|,
    x* = |det(B)| x′ ≥ 0, and y* = |det(B)| y′ ≥ 0.

Next, we construct a trading transform corresponding to the pair x* and y*. Let X = (X_1, X_2, ..., X_{|det(B)|}) be a sequence of winning coalitions in which each winning coalition S ∈ W appears x*_S times. Similarly, we introduce a sequence Y = (Y_1, Y_2, ..., Y_{|det(B)|}), where each losing coalition S ∈ L appears y*_S times. The above equalities imply that (X; Y) is a trading transform of size |det(B)|. Therefore, we have shown that if D1 is feasible, then a given simple game G = (N, W) is not |det(B)|-trade robust.
Finally, we provide an upper bound on |det(B)|. Let α_n be the maximum of the determinants of n×n 0–1 matrices. For any n×n 0–1 matrix M, it is easy to show that det(M) ≥ −α_n by swapping two rows of M (when n ≥ 2). If a column of B is indexed by a component of x (i.e., indexed by a winning coalition), then each component of the column is either 0 or 1. Otherwise, a column of B is indexed by a component of y (i.e., indexed by a losing coalition), whose components are either 0 or −1. Now, we apply elementary matrix operations to B (see Figure 1): for each column of B indexed by a component of y, we multiply the column by (−1). The resulting matrix, denoted by B′, is a 0–1 matrix satisfying |det(B)| = |det(B′)|.

Figure 1: Example of elementary matrix operations for D1.

As B is a submatrix of A_1, the number of rows (columns) of B, denoted by n′, is less than or equal to n + 2. When n′ < n + 2, we obtain the desired result: |det(B)| = |det(B′)| ≤ α_{n′} ≤ α_{n+1}. If n′ = n + 2, then B has a row vector corresponding to the equality 1⊤x − 1⊤y = 0, each component of which is either 1 or −1, and thus B′ has an all-one row vector. Lemma 2.1 (c1) below states that |det(B)| = |det(B′)| ≤ α_{n′−1} ≤ α_{n+1}.

Lemma 2.1. Let M be an n×n 0–1 matrix, where n ≥ 2.
(c1) If a row (column) vector of M is the all-one vector, then |det(M)| ≤ α_{n−1}.
(c2) If a row (column) vector of M is a 0–1 vector consisting of a unique 0-component and n−1 1-components, then |det(M)| ≤ 2α_{n−1}.

Proof of (c1). Assume that the first column of M is the all-one vector. We apply the following elementary matrix operations to M (see Figure 2). For each column of M except the first, if its first component is equal to 1, then we multiply the column by (−1) and add the all-one column vector. The obtained matrix, denoted by M′, is an n×n 0–1 matrix satisfying |det(M)| = |det(M′)| whose first row is a unit vector. Thus, it is obvious that |det(M′)| ≤ α_{n−1}.

Figure 2: Example of elementary matrix operations for (c1).

Proof of (c2). Assume that the first column vector of M, denoted by a, contains exactly one 0-component. Obviously, e = 1 − a is a unit vector. Let M_1 and M_e be the pair of matrices obtained from M with the first column replaced by 1 and e, respectively. Then, it is easy to prove that

    |det(M)| = |det(M_1) − det(M_e)| ≤ |det(M_1)| + |det(M_e)| ≤ 2α_{n−1}.

QED
From the above discussion, we obtain the following theorem (without the assumption of the monotonicity property (1)).

Theorem 2.2. A given simple game G = (N, W) with n players is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} is the maximum of determinants of (n+1)×(n+1) 0–1 matrices.

Proof. If a given simple game is not α_{n+1}-trade robust, then it is not trade robust and thus not weighted, as shown by [Taylor and Zwicker, 1992, Taylor and Zwicker, 1999]. We have discussed the inverse implication: if a given simple game G is not weighted, then the linear inequality system P1 is infeasible. Farkas' lemma [Farkas, 1902] implies that D1 is feasible. From the above discussion, we have a trading transform (X_1, ..., X_j; Y_1, ..., Y_j) satisfying j ≤ α_{n+1}, X_1, ..., X_j ∈ W, and Y_1, ..., Y_j ∈ L. QED

Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain Theorem 2.3.

Theorem 2.3. For any positive integer n, α_n ≤ (n+1)^{(n+1)/2} (1/2)^n.

The exact values of α_n for small positive integers n appear in "The On-Line Encyclopedia of Integer Sequences (A003432)" [Sloane et al., 2018] and in Table 1.
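These small values can also be reproduced by exhaustive search; a brute-force sketch (exponential in n², so only practical as a sanity check for n ≤ 4):

```python
import numpy as np
from itertools import product

def alpha(n):
    """Maximal |det| over all n x n 0-1 matrices, by enumeration."""
    return round(max(abs(np.linalg.det(np.array(bits, dtype=float).reshape(n, n)))
                     for bits in product((0, 1), repeat=n * n)))

print([alpha(n) for n in range(1, 5)])  # [1, 1, 2, 3]
```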
3 Integer Weights of Weighted Simple Games
This section discusses the integer-weight representations of weighted simple games. Throughout this section, we do not need to assume the monotonicity property (1), except in Table 1.

Theorem 3.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If a given simple game G is weighted, then there exists an integer-weight representation (q; w⊤) of G satisfying |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1.

Proof. It is easy to show that a given simple game G = (N, W) is weighted if and only if the following linear inequality system is feasible:

    P2:  A(W)w ≥ q1,
         A(L)w ≤ q1 − 1,
         1⊤w ≤ u − 1.

We define

    A_2 = (A(W), 1, 0; −A(L), −1, 0; −1⊤, 0, 1),  v = (w; −q; u),  d = (0; 1; 1),

and denote the inequality system P2 by A_2 v ≥ d.
Subsequently, we assume that P2 is feasible. A non-singular submatrix B̂ of A_2 is called a basis matrix. Variables corresponding to columns of B̂ are called basic variables, and J_B̂ denotes the index set of basic variables. Let d_B̂ be the subvector of d corresponding to the rows of B̂. A basic solution with respect to B̂ is a vector v defined by v_i = v̂_i (i ∈ J_B̂) and v_i = 0 (i ∉ J_B̂), where v̂ is the vector of basic variables satisfying v̂ = B̂^{−1} d_B̂. It is well known that if the linear inequality system P2 is feasible, there exists a basic feasible solution.
Let (w′⊤, −q′, u′)⊤ be a basic feasible solution of P2 with respect to a basis matrix B. The assumption ∅ ∉ W implies that 0 ≤ q′ − 1 and thus −q′ ≠ 0. As N ∈ W, we have the inequalities u′ − 1 ≥ 1⊤w′ ≥ q′ ≥ 1, which imply that u′ ≠ 0. The definition of a basic solution implies that −q and u are basic variables with respect to the basis matrix B. Thus, B has columns corresponding to the basic variables −q and u. The column of B indexed by u is called the last column. As B is invertible, the last column of B is not the zero vector, and thus B includes a row corresponding to the inequality 1⊤w ≤ u − 1, which is called the last row (see Figure 3). Here, the number of rows (columns) of B, denoted by n′, is less than or equal to n + 2.
For simplicity, we denote the basic feasible solution (w′⊤, −q′, u′)⊤ by v′. By Cramer's rule, v′_i = det(B_i)/det(B) for each i ∈ J_B, where B_i is obtained from B with the column corresponding to variable v_i replaced by d_B. Because B_i is an integer matrix, det(B) v′_i = det(B_i) is an integer for any i ∈ J_B. Cramer's rule states that (w*⊤, −q*, u*) = |det(B)| (w′⊤, −q′, u′) is an integer vector satisfying the following conditions:

    A(W)w* = |det(B)| A(W)w′ ≥ |det(B)| q′ 1 = q* 1,
    A(L)w* = |det(B)| A(L)w′ ≤ |det(B)| (q′1 − 1) ≤ q* 1 − 1, and
    1⊤w* = |det(B)| 1⊤w′ ≤ |det(B)| (u′ − 1) ≤ u* − 1.

From the above, (q*; w*⊤) is an integer-weight representation of G. As N ∈ W, we obtain 1⊤w* ≥ q* = |det(B)| q′ ≥ 1.

Figure 3: Examples of elementary matrix operations for P2.

Now, we discuss the magnitude of |q*| = |det(B_q)|, where B_q is obtained from B with the column corresponding to variable −q replaced by d_B. As the last column of B_q is a unit vector, we delete the last column and the last row from B_q and obtain a matrix B′_q satisfying det(B_q) = det(B′_q). We apply the following elementary matrix operations to B′_q. First, we multiply the column corresponding to variable −q (which is equal to d_B) by (−1). Next, we multiply the rows indexed by losing coalitions by (−1). The resulting matrix, denoted by B″_q, is 0–1 valued and satisfies

    |q*| = |det(B_q)| = |det(B′_q)| = |det(B″_q)| ≤ α_{n′−1} ≤ α_{n+1}.

Next, we show that |w*_i| ≤ α_n (i ∈ N). If w*_i ≠ 0, then w_i is a basic variable that satisfies |w*_i| = |det(B_i)|, where B_i is obtained from B with the column corresponding to variable w_i replaced by d_B. In a manner similar to that above, we delete the last column and the last row from B_i and obtain a matrix B′_i satisfying det(B_i) = det(B′_i). Next, we multiply the column corresponding to variable w_i by (−1), multiply the rows indexed by losing coalitions by (−1), and obtain a 0–1 matrix B″_i. Matrix B_i contains a column corresponding to the original variable −q, whose entries are 1 or −1. Thus, matrix B″_i contains a column vector equal to the all-one vector. Lemma 2.1 (c1) implies that

    |w*_i| = |det(B_i)| = |det(B′_i)| = |det(B″_i)| ≤ α_{n′−2} ≤ α_n.

Lastly, we discuss the value of |u*| = |det(B_u)|, where B_u is obtained from B with the last column (the column indexed by variable u) replaced by d_B. In a manner similar to that above, we multiply the last column by (−1), multiply the rows indexed by losing coalitions by (−1), and multiply the last row by (−1). The resulting matrix, denoted by B′_u, is a 0–1 matrix in which the last row contains exactly one 0-component (indexed by variable −q). Lemma 2.1 (c2) implies that

    |u*| = |det(B_u)| = |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_{n+1},

and thus 1⊤w* ≤ u* − 1 ≤ |u*| − 1 ≤ 2α_{n+1} − 1. QED
[Kurz, 2012] exhaustively generated all weighted voting games satisfying the monotonicity property (1) for up to nine voters. Table 1 shows the maxima of the exact values of minimal integer-weight representations obtained by [Kurz, 2012], Muroga's bounds from [Muroga, 1971], and our upper bounds. The table shows that our bound on the quota is tight when n ≤ 5.

Table 1: Exact values of integer-weight representations.

n                                   1  2  3  4   5   6   7    8    9     10    11
α_n †                               1  1  2  3   5   9   32   56   144   320   1458
max_{(N,W)} min_{[q;w]} max_i w_i ‡ 1  1  2  3   5   9   18   42   110
Muroga's bound (α_n) •              1  1  2  3   5   9   32   56   144   320   1458
max_{(N,W)} min_{[q;w]} q ‡         1  2  3  5   9   18  40   105  295
Our bound (α_{n+1})                 1  2  3  5   9   32  56   144  320   1458
Muroga's bound (nα_n) •             1  2  6  12  25  54  224  448  1296  3200  16038
max_{(N,W)} min_{[q;w]} Σ_i w_i ‡   1  2  4  8   15  33  77   202  568
Our bound (2α_{n+1}−1)              1  3  5  9   17  63  111  287  639   2915

† [Sloane et al., 2018], ‡ [Kurz, 2012], • [Muroga, 1971].

4 Rounding Method
This section addresses the problem of finding integer-weight representations. In this section, we assume the monotonicity property (1). In addition, a weighted simple game is given by a triplet (N, W^m, L^M), where W^m and L^M denote the set of minimal winning coalitions and the set of maximal losing coalitions, respectively. We also assume that the empty set is a losing coalition, N is a winning coalition, and no player in N is a null player. Thus, there exists an integer-weight representation in which q ≥ 1 and w_i ≥ 1 (∀i ∈ N).
We discuss the problem of finding an integer-weight representation, which is formulated as the following integer programming problem:

    Q: find a vector (q; w)
       satisfying  Σ_{i∈S} w_i ≥ q      (∀S ∈ W^m),   (3)
                   Σ_{i∈S} w_i ≤ q − 1  (∀S ∈ L^M),   (4)
                   q ≥ 1, w_i ≥ 1       (∀i ∈ N),     (5)
                   q ∈ Z, w_i ∈ Z       (∀i ∈ N).     (6)

A linear relaxation problem Q̄ is obtained from Q by dropping the integer constraints (6).
Let (q*; w*⊤) be a basic feasible solution of the linear inequality system Q̄. Our proof in the previous section shows that |det(B*)| (q*; w*⊤) gives a solution of Q (i.e., an integer-weight representation), where B* denotes a corresponding basis matrix of Q̄. When |det(B*)| > n, there exists a simple method for generating a smaller integer-weight representation. For any weight vector w = (w_1, w_2, ..., w_n)⊤, we denote the integer vector (⌊w_1⌋, ⌊w_2⌋, ..., ⌊w_n⌋)⊤ by ⌊w⌋. Given a solution (q*; w*⊤) of Q̄, we introduce the integer vector w′ = ⌊n w*⌋ and the integer q′ = ⌊n(q* − 1)⌋ + 1. For any minimal winning coalition S ∈ W^m, we have

    Σ_{i∈S} w′_i > Σ_{i∈S} (n w*_i − 1) ≥ n Σ_{i∈S} w*_i − n ≥ n q* − n = n(q* − 1) ≥ ⌊n(q* − 1)⌋,

and thus Σ_{i∈S} w′_i ≥ ⌊n(q* − 1)⌋ + 1 = q′. Each maximal losing coalition S ∈ L^M satisfies

    Σ_{i∈S} w′_i ≤ Σ_{i∈S} n w*_i ≤ n(q* − 1),

and thus Σ_{i∈S} w′_i ≤ ⌊n(q* − 1)⌋ = q′ − 1. Thus, the pair w′ and q′ gives an integer-weight representation satisfying (q′; w′⊤) ≤ n (q*; w*⊤).
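This scale-by-n rounding is mechanical; a minimal sketch (the function name is ours):

```python
from math import floor

def round_by_n(q_star, w_star):
    """Given a feasible (q*; w*) of the relaxation, return the integer
    representation w' = floor(n*w*), q' = floor(n*(q*-1)) + 1 justified
    by the inequalities above."""
    n = len(w_star)
    return floor(n * (q_star - 1)) + 1, [floor(n * wi) for wi in w_star]
```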
In the remainder of this section, we show that there exists an integer-weight representation (vector) that is less than or equal to ((2−√2)n + (√2−1)) (q*; w*⊤) < (0.5858n + 0.4143) (q*; w*⊤) for any solution (q*; w*⊤) of Q̄.

Theorem 4.1. Let (q*; w*⊤) be a solution of Q̄. Define ℓ_1 = (2−√2)n − (√2−1) and u_1 = (2−√2)n + (√2−1). Then, there exists a real number λ• ∈ [ℓ_1, u_1] such that the pair Q = ⌊λ•(q*−1)⌋ + 1 and W = ⌊λ• w*⌋ gives a feasible solution of Q (i.e., an integer-weight representation).

Proof. For any positive real λ, it is easy to see that each maximal losing coalition S ∈ L^M satisfies

    Σ_{i∈S} ⌊λ w*_i⌋ ≤ Σ_{i∈S} λ w*_i ≤ λ(q* − 1), and thus Σ_{i∈S} ⌊λ w*_i⌋ ≤ ⌊λ(q* − 1)⌋.

To discuss the weights of minimal winning coalitions, we introduce the function g(λ) = λ − Σ_{i∈N} (λ w*_i − ⌊λ w*_i⌋). In the second part of this proof, we show that if we choose Λ ∈ [ℓ_1, u_1] uniformly at random, then E[g(Λ)] ≥ 0. This implies that there exists λ• ∈ [ℓ_1, u_1] satisfying g(λ•) > 0, because g(λ) is right-continuous, piecewise linear, and not a constant function. When g(λ•) > 0, each minimal winning coalition S ∈ W^m satisfies

    λ• > Σ_{i∈N} (λ• w*_i − ⌊λ• w*_i⌋) ≥ Σ_{i∈S} (λ• w*_i − ⌊λ• w*_i⌋) = Σ_{i∈S} λ• w*_i − Σ_{i∈S} ⌊λ• w*_i⌋,   (7)

which implies

    Σ_{i∈S} ⌊λ• w*_i⌋ > Σ_{i∈S} λ• w*_i − λ• = λ• (Σ_{i∈S} w*_i − 1) ≥ λ• (q* − 1) ≥ ⌊λ• (q* − 1)⌋,

and thus Σ_{i∈S} ⌊λ• w*_i⌋ ≥ ⌊λ• (q* − 1)⌋ + 1.
Finally, we show that E[g(Λ)] ≥ 0 if we choose Λ ∈ [ℓ_1, u_1] uniformly at random. It is obvious that

    E[g(Λ)] = E[Λ] − Σ_{i∈N} E[Λ w*_i − ⌊Λ w*_i⌋]
            = (ℓ_1 + u_1)/2 − Σ_{i∈N} (1/(u_1 − ℓ_1)) ∫_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) dλ
            = (2−√2)n − Σ_{i∈N} (1/(u_1 − ℓ_1)) ∫_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) dλ.

Let us discuss the last term appearing above. By substituting μ for λ w*_i, we obtain

    (1/(u_1 − ℓ_1)) ∫_{ℓ_1}^{u_1} (λ w*_i − ⌊λ w*_i⌋) dλ
    = (1/(w*_i (u_1 − ℓ_1))) ∫_{ℓ_1 w*_i}^{u_1 w*_i} (μ − ⌊μ⌋) dμ
    ≤ (1/(w*_i (u_1 − ℓ_1))) ∫_{−w*_i (u_1 − ℓ_1)}^{0} (μ − ⌊μ⌋) dμ
    = (1/x) ∫_{−x}^{0} (μ − ⌊μ⌋) dμ,

where the last equality is obtained by setting x = w*_i (u_1 − ℓ_1). As u_1 − ℓ_1 = 2(√2−1) and w*_i ≥ 1, it is clear that x = w*_i (u_1 − ℓ_1) ≥ 2(√2−1). Here, we introduce the function

    f(x) = (1/x) ∫_{−x}^{0} (μ − ⌊μ⌋) dμ.

According to numerical calculations (see Figure 4), the inequality x ≥ 2(√2−1) implies that f(x) ≤ 2−√2.

Figure 4: Plot of the function f(x) = (1/x) ∫_{−x}^{0} (μ − ⌊μ⌋) dμ.

From the above, we obtain the desired result:

    E[g(Λ)] ≥ (2−√2)n − Σ_{i∈N} (2−√2) = (2−√2)n − (2−√2)n = 0.

QED
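The numerical claim about f can be checked directly: over each full unit interval the fractional part integrates to 1/2, and the remaining piece of length t = x − ⌊x⌋ contributes t − t²/2, which gives a closed form for f. A sketch of the verification (our own, not the authors' code):

```python
import numpy as np

def f(x):
    """f(x) = (1/x) * integral_{-x}^{0} (mu - floor(mu)) dmu in closed form:
    floor(x)/2 from the full unit intervals, plus t - t^2/2 for the rest."""
    k = np.floor(x)
    t = x - k
    return (k / 2 + t - t * t / 2) / x

xs = np.linspace(2 * (np.sqrt(2) - 1), 100.0, 1_000_000)
assert f(xs).max() <= 2 - np.sqrt(2) + 1e-9  # f(x) <= 2 - sqrt(2) on this range
```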
5 Roughly Weighted Simple Games
In this section, we discuss roughly weighted simple games. First, we give an upper bound on the length of a potent certificate of non-weightedness.

Theorem 5.1. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N and the monotonicity property (1). If a given simple game G is not roughly weighted, then there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.

Proof. Let us introduce a linear inequality system:

    P3:  (A(W), 1; −A(L), −1)(w; −q) ≥ 0,
         1⊤w > 0.

First, we show that if P3 is feasible, then a given simple game is roughly weighted. Let (q′; w′⊤) be a feasible solution of P3. We introduce a new voting weight w″_i = max{w′_i, 0} for each i ∈ N. We show that (q′; w″⊤) is a rough voting representation. As 1⊤w′ > 0, the vector w′ includes at least one positive component, and thus w″ ≠ 0. If a coalition S satisfies Σ_{i∈S} w″_i < q′, then q′ > Σ_{i∈S} w″_i ≥ Σ_{i∈S} w′_i, and thus S is losing. Consider the case in which a coalition S satisfies Σ_{i∈S} w″_i > q′. Let S′ = {i ∈ S | w′_i > 0}. It is obvious that q′ ...
... and thus Cramer's rule states that det(B) u′ = det(B_u) (Figure 5 shows an example). We multiply the columns of B_u corresponding to components in (y⊤, u) by (−1) and obtain a 0–1 matrix B′_u satisfying |det(B_u)| = |det(B′_u)|. As c̃′ includes at most one 0-component, Lemma 2.1 implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_{n+1}. Thus, the length of (X; Y) satisfies

    Σ_{S∈W} x*_S = Σ_{S∈W} |det(B)| x′_S + |det(B)| = |det(B)| (1⊤x′ + 1)
                 = |det(B)| (u′ − 1 + 1) = |det(B)| u′ = |det(B) u′|
                 = |det(B_u)| = |det(B′_u)| ≤ 2α_{n+1}.

QED

In the rest of this section, we discuss integer voting weights and the quota of a rough voting representation. We say that a player i ∈ N is a passer if and only if every coalition S ∋ i is winning.

Figure 5: Example of elementary matrix operations for D3+.

Theorem 5.2. Assume that a given simple game G = (N, W) satisfies ∅ ∉ W ∋ N. If a given simple game G is roughly weighted, then there exists an integer vector (q; w⊤) of the rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_n.

Proof. First, we show that if a given game is roughly weighted, then either

    P4:  (A(W), 0; −A(L), 0; −1⊤, 1)(w; u) ≥ (1; −1; 0),  w ≥ 0, u ≥ 0,

is feasible or there exists at least one passer. Suppose that a given simple game has a rough voting representation (q; w⊤). If q > 0, then (1/q)w becomes a feasible solution of P4 by setting u to a sufficiently large positive number. Consider the case q ≤ 0. The assumption ∅ ∉ W implies that 0 ≤ q, and thus we obtain q = 0. The properties (q, w⊤) ≠ 0⊤ and w ≥ 0 imply that there exists i◦ ∈ N with w_{i◦} > 0, i.e., a given game G has a passer i◦.
When a given game G has a passer i◦ ∈ N, then there exists a rough voting representation (q◦; w◦⊤) defined by

    w◦_i = 1 (i = i◦), w◦_i = 0 (i ≠ i◦), and q◦ = 0,

which produces the desired result.
Lastly, we consider the case in which P4 is feasible. It is well known that when P4 is feasible, there exists a basic feasible solution. Let (w′⊤, u′)⊤ be a basic feasible solution of P4 and B be a corresponding basis matrix. It is easy to see that (1; w′⊤) is a rough voting representation of G. The assumption N ∈ W implies the positivity of u′, because u′ ≥ 1⊤w′ ≥ 1. Then, variable u is a basic variable, and thus B includes a column corresponding to u, which is called the last column. The non-singularity of B implies that the column corresponding to u is not the zero vector, and thus B includes a row corresponding to the inequality 1⊤w ≤ u, which is called the last row (see Figure 6). The number of rows (columns) of the basis matrix B, denoted by n′, is less than or equal to n + 1.
Cramer's rule states that (q*, w*⊤, u*) = |det(B)| (1, w′⊤, u′) is a non-negative integer vector. It is easy to see that (q*, w*⊤, u*) satisfies

    A(W)w* = |det(B)| A(W)w′ ≥ |det(B)| 1 = q* 1,
    A(L)w* = |det(B)| A(L)w′ ≤ |det(B)| 1 = q* 1, and
    1⊤w* = |det(B)| 1⊤w′ ≤ |det(B)| u′ = u*.

From the above, (q*; w*⊤) is an integer vector of a rough voting representation. The assumption N ∈ W implies that 1⊤w* ≥ q* = |det(B)| ≥ 1.
Let d′_B be the subvector of the right-hand-side vector of the inequality system in P4 corresponding to the rows of B. Cramer's rule states that det(B) u′ = det(B_u), where B_u is obtained from B with the column corresponding to the basic variable u replaced by d′_B (see Figure 6). We multiply the rows of B_u that correspond to losing coalitions by (−1) and multiply the last row by (−1). The resulting matrix, denoted by B′_u, is a 0–1 matrix whose last row includes exactly one 0-component (indexed by u).
Lemma 2.1 (c2) implies that |det(B′_u)| ≤ 2α_{n′−1} ≤ 2α_n. Thus, we obtain

    1⊤w* ≤ u* ≤ |u*| = |det(B) u′| = |det(B_u)| = |det(B′_u)| ≤ 2α_n.

By analogy with the proof of Theorem 3.1, we can prove the desired inequalities: q* = |det(B)| ≤ α_n and w*_i ≤ α_{n−1} (∀i ∈ N). QED

Figure 6: Examples of elementary matrix operations for P4.

6 Conclusion
In this paper, we discussed the smallest value k* such that every k*-trade robust simple game is weighted. We provided a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof yields an improved upper bound on the required length of a trading transform. We showed that a given simple game G is weighted if and only if G is α_{n+1}-trade robust, where α_{n+1} denotes the maximal value of determinants of (n+1)×(n+1) 0–1 matrices. Applying Hadamard's evaluation [Hadamard, 1893] of the determinant, we obtain k* ≤ α_{n+1} ≤ (n+2)^{(n+2)/2} (1/2)^{n+1}, which improves the existing bound k* ≤ (n+1)n^{n/2} obtained by [Gvozdeva and Slinko, 2011].
Next, we discussed upper bounds for the maximum possible integer weights and the quota needed to represent any weighted simple game with n players. We showed that every weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer-weight representation (q; w⊤) ∈ Z × Z^N such that |w_i| ≤ α_n (∀i ∈ N), |q| ≤ α_{n+1}, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_{n+1} − 1. We demonstrated the tightness of our bound on the quota when n ≤ 5.
We described a rounding method based on a linear relaxation of an integer programming problem for finding an integer-weight representation. We showed that an integer-weight representation is obtained by carefully rounding a solution of the linear inequality system multiplied by λ• ≤ (2−√2)n + (√2−1) < 0.5858n + 0.4143. Our proof of Theorem 4.1 indicates the existence of a randomized rounding algorithm for finding an appropriate value λ•. However, from a theoretical point of view, Theorem 4.1 only shows the existence of a real number λ•. Even if there exists an appropriate "rational" number λ•, we need to determine the size of that rational number (its numerator and denominator) to implement a naive randomized rounding algorithm. Thus, it remains open whether there exists an efficient algorithm for finding an integer-weight representation satisfying the properties in Theorem 4.1.
Lastly, we showed that a roughly weighted simple game (satisfying ∅ ∉ W ∋ N) has an integer vector (q; w⊤) of the rough voting representation satisfying 0 ≤ w_i ≤ α_{n−1} (∀i ∈ N), 0 ≤ q ≤ α_n, and 1 ≤ Σ_{i∈N} w_i ≤ 2α_n. When a given simple game is not roughly weighted, we showed that (under the monotonicity property (1) and ∅ ∉ W ∋ N) there exists a potent certificate of non-weightedness whose length is less than or equal to 2α_{n+1}.

References
[Baugh, 1970] Baugh, C. R. (1970). Pseudo-threshold logic: A generalization of threshold logic. PhD thesis, University of Illinois at Urbana-Champaign.
[Chow, 1961] Chow, C.-K. (1961). On the characterization of threshold functions. In 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961), pages 34–38. IEEE.
[Elgot, 1961] Elgot, C. C. (1961).
Decision problems of finite automata design and related arithmetics. Transactions of the American Mathematical Society, 98(1):21–51.

[Farkas, 1902] Farkas, J. (1902). Theorie der einfachen Ungleichungen. Journal für die reine und angewandte Mathematik, 1902(124):1–27.

[Freixas, 2021] Freixas, J. (2021). A characterization of weighted simple games based on pseudoweightings. Optimization Letters, 15:1371–1383.

[Freixas et al., 2016] Freixas, J., Freixas, M., and Kurz, S. (2016). Characterization of threshold functions: state of the art, some new contributions and open problems. Available at SSRN. https://ssrn.com/abstract=2740475 (March 1, 2016).

[Freixas et al., 2017] Freixas, J., Freixas, M., and Kurz, S. (2017). On the characterization of weighted simple games. Theory and Decision, 83(4):469–498.

[Freixas and Kurz, 2014] Freixas, J. and Kurz, S. (2014). On α-roughly weighted games. International Journal of Game Theory, 43(3):659–692.

[Freixas and Molinero, 2009a] Freixas, J. and Molinero, X. (2009a). On the existence of a minimum integer representation for weighted voting systems. Annals of Operations Research, 166(1):243–260.

[Freixas and Molinero, 2009b] Freixas, J. and Molinero, X. (2009b). Simple games and weighted games: a theoretical and computational viewpoint. Discrete Applied Mathematics, 157(7):1496–1508.

[Freixas and Molinero, 2010] Freixas, J. and Molinero, X. (2010). Weighted games without a unique minimal representation in integers. Optimisation Methods & Software, 25(2):203–215.

[Gvozdeva et al., 2013] Gvozdeva, T., Hemaspaandra, L. A., and Slinko, A. (2013). Three hierarchies of simple games parameterized by "resource" parameters. International Journal of Game Theory, 42(1):1–17.

[Gvozdeva and Slinko, 2011] Gvozdeva, T. and Slinko, A. (2011). Weighted and roughly weighted simple games. Mathematical Social Sciences, 61(1):20–30.

[Hadamard, 1893] Hadamard, J. (1893). Résolution d'une question relative aux déterminants. Bull. des Sciences Math., 2:240–246.

[Hameed and Slinko, 2015] Hameed, A. and Slinko, A. (2015). Roughly weighted hierarchical simple games. International Journal of Game Theory, 44(2):295–319.

[Hansen and Podolskii, 2015] Hansen, K. A. and Podolskii, V. V. (2015). Polynomial threshold functions and Boolean threshold circuits. Information and Computation, 240:56–73.

[Håstad, 1994] Håstad, J. (1994). On the size of weights for threshold gates. SIAM Journal on Discrete Mathematics, 7(3):484–492.

[Isbell, 1956] Isbell, J. R. (1956). A class of majority games. The Quarterly Journal of Mathematics, 7(1):183–187.

[Krohn and Sudhölter, 1995] Krohn, I. and Sudhölter, P. (1995). Directed and weighted majority games. Zeitschrift für Operations Research, 42(2):189–216.

[Kurz, 2012] Kurz, S. (2012). On minimum sum representations for weighted voting games. Annals of Operations Research, 196(1):361–369.

[Muroga, 1971] Muroga, S. (1971). Threshold Logic and its Applications. Wiley, New York.

[Muroga et al., 1962] Muroga, S., Toda, I., and Kondo, M. (1962). Majority decision functions of up to six variables. Mathematics of Computation, 16(80):459–472.

[Muroga et al., 1970] Muroga, S., Tsuboi, T., and Baugh, C. R. (1970). Enumeration of threshold functions of eight variables. IEEE Transactions on Computers, 100(9):818–825.

[Myhill and Kautz, 1961] Myhill, J. and Kautz, W. H. (1961). On the size of weights required for linear-input switching functions.
IRE Transactions on Electronic Computers, EC-10(2):288–290.

[Peled and Simeone, 1985] Peled, U. N. and Simeone, B. (1985). Polynomial-time algorithms for regular set-covering and threshold synthesis. Discrete Applied Mathematics, 12(1):57–69.

[Sloane et al., 2018] Sloane, N. J. et al. (2018). The On-Line Encyclopedia of Integer Sequences (A003432). Published electronically.

[Taylor and Zwicker, 1992] Taylor, A. D. and Zwicker, W. S. (1992). A characterization of weighted voting. Proceedings of the American Mathematical Society, 115(4):1089–1094.

[Taylor and Zwicker, 1999] Taylor, A. D. and Zwicker, W. S. (1999). Simple Games: Desirability Relations, Trading, Pseudoweightings. Princeton University Press.

[Wang and Williams, 1991] Wang, C. and Williams, A. (1991). The threshold order of a Boolean function. Discrete Applied Mathematics, 31(1):51–69.

[Winder, 1965] Winder, R. O. (1965). Enumeration of seven-argument threshold functions. IEEE Transactions on Electronic Computers, EC-14(3):315–325.
\ No newline at end of file
diff --git a/txt/2102.04993.txt b/txt/2102.04993.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c72c58bff378b4ffb7bb64defa967033ded93906
--- /dev/null
+++ b/txt/2102.04993.txt
@@ -0,0 +1,1202 @@
Attention-Based Neural Networks for Chroma Intra Prediction in Video Coding
Marc Górriz Blanch, Student Member IEEE, Saverio Blasi, Alan F. Smeaton, Fellow IEEE, Noel E. O'Connor, Member IEEE, and Marta Mrak, Senior Member IEEE

Abstract—Neural networks can be successfully used to improve several modules of advanced video coding schemes. In particular, compression of colour components was shown to greatly benefit from usage of machine learning models, thanks to the design of appropriate attention-based architectures that allow the prediction to exploit specific samples in the reference region. However, such architectures tend to be complex and computationally intense, and may be difficult to deploy in a practical video coding pipeline. This work focuses on reducing the complexity of such methodologies, to design a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. A novel size-agnostic multi-model approach is proposed to reduce the complexity of the inference process. The resulting simplified architecture is still capable of outperforming state-of-the-art methods. Moreover, a collection of simplifications is presented in this paper, to further reduce the complexity overhead of the proposed prediction architecture. Thanks to these simplifications, a reduction in the number of parameters of around 90% is achieved with respect to the original attention-based methodologies. Simplifications include a framework for reducing the overhead of the convolutional operations, a simplified cross-component processing model integrated into the original architecture, and a methodology to perform integer-precision approximations with the aim to obtain fast and hardware-aware implementations. The proposed schemes are integrated into the Versatile Video Coding (VVC) prediction pipeline, retaining the compression efficiency of state-of-the-art chroma intra-prediction methods based on neural networks, while offering different directions for significantly reducing coding complexity.
Index Terms—Chroma intra prediction, convolutional neural networks, attention algorithms, multi-model architectures, complexity reduction, video coding standards.

I. INTRODUCTION

EFFICIENT video compression has become an essential component of multimedia streaming. The convergence of digital entertainment, followed by the growth of web services such as video conferencing, cloud gaming and real-time high-quality video streaming, prompted the development of advanced video coding technologies capable of tackling the increasing demand for higher quality video content and its consumption on multiple devices. New compression techniques enable a compact representation of video data by identifying and removing spatial-temporal and statistical redundancies within the signal. This results in smaller bitstreams, enabling more efficient storage and transmission as well as distribution of content at higher quality, requiring reduced resources.

Manuscript submitted July 1, 2020. The work described in this paper has been conducted within the project JOLT funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 765140.
M. Górriz Blanch, S. Blasi and M. Mrak are with BBC Research & Development, The Lighthouse, White City Place, 201 Wood Lane, London, UK (e-mail: marc.gorrizblanch@bbc.co.uk, saverio.blasi@bbc.co.uk, marta.mrak@bbc.co.uk).
A. F. Smeaton and N. E. O'Connor are with Dublin City University, Glasnevin, Dublin 9, Ireland (e-mail: alan.smeaton@dcu.ie, noel.oconnor@dcu.ie).

Fig. 1. Visualisation of the attentive prediction process. For each reference sample 0–16, the attention module generates its contribution to the prediction of individual pixels from a target 4×4 block.

Advanced video compression algorithms are often complex and computationally intense, significantly increasing the encoding and decoding time. Therefore, despite bringing high coding gains, their potential for application in practice is limited. Among the current state-of-the-art solutions, the next-generation Versatile Video Coding standard [1] (referred to as VVC in the rest of this paper) targets 30–50% better compression rates for the same perceptual quality, supporting resolutions from 4K to 16K as well as 360° videos. One fundamental component of hybrid video coding schemes, intra prediction, exploits spatial redundancies within a frame by predicting samples of the current block from already reconstructed samples in its close surroundings. VVC allows a large number of possible intra prediction modes to be used on the luma component, at the cost of a considerable amount of signalling data. Conversely, to limit the impact of mode signalling, chroma components employ a reduced set of modes [1].

In addition to traditional modes, more recent research introduced schemes which further exploit cross-component correlations between the luma and chroma components. Such correlations motivated the development of the Cross-Component Linear Model (CCLM, or simply LM in this paper) intra modes. When using CCLM, the chroma components are predicted from already reconstructed luma samples using a linear model. Nonetheless, the limitation of simple linear predictions comes from their high dependency on the selection of predefined reference samples.
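To make the CCLM idea concrete, the following minimal sketch (ours, not from the paper; it fits the model by least squares over the neighbouring samples, whereas the standard derives the parameters from the extreme reference samples to avoid a full fit) predicts a chroma block as an affine function of the downsampled reconstructed luma:

```python
# Minimal sketch of CCLM-style prediction (illustrative only; variable
# names and the least-squares derivation are ours).
import numpy as np

def cclm_predict(rec_luma_ds, nb_luma, nb_chroma):
    """Predict a chroma block as pred_C = alpha * rec_L' + beta.

    alpha and beta are fitted on the co-located boundary pairs
    (nb_luma, nb_chroma); both encoder and decoder can repeat this
    derivation, so no parameters need to be signalled.
    """
    x, y = nb_luma.astype(np.float64), nb_chroma.astype(np.float64)
    var = np.var(x)
    alpha = 0.0 if var == 0 else np.cov(x, y, bias=True)[0, 1] / var
    beta = y.mean() - alpha * x.mean()
    return alpha * rec_luma_ds + beta

# Toy usage with random data standing in for decoded samples:
rng = np.random.default_rng(0)
luma = rng.integers(0, 256, (4, 4)).astype(np.float64)
nb_l = rng.integers(0, 256, 9).astype(np.float64)
pred_cb = cclm_predict(luma, nb_l, 0.5 * nb_l + 12)  # chroma ~ linear in luma
```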
Improved performance can be achieved using more sophisticated Machine Learning (ML) mechanisms [2], [3], which are able to derive more complex representations of the reference data and hence boost the prediction capabilities.

Methods based on Convolutional Neural Networks (CNNs) [2], [4] provided significant improvements, at the cost of two main drawbacks: the associated increase in system complexity, and the tendency to disregard the location of individual reference samples. Related works deployed complex neural networks (NNs) by means of model-based interpretability [5]. For instance, VVC recently adopted simplified NN-based methods such as Matrix Intra Prediction (MIP) modes [6] and the Low-Frequency Non-Separable Transform (LFNST) [7]. For the particular task of block-based intra-prediction, the usage of complex NN models can be counterproductive if there is no control over the relative position of the reference samples. When using fully-connected layers, all input samples contribute to all output positions, and after the consecutive application of several hidden layers, the location of each input sample is lost. This behaviour clearly runs counter to the design of traditional approaches, in which predefined directional modes carefully specify which boundary locations contribute to each prediction position. A novel ML-based cross-component intra-prediction method is proposed in [4], introducing a new attention module capable of tracking the contribution of each neighbouring reference sample when computing the prediction of each chroma pixel, as shown in Figure 1. As a result, the proposed scheme better captures the relationship between the luma and chroma components, resulting in more accurate prediction samples. However, such NN-based methods significantly increase the codec complexity, increasing the encoder and decoder times by up to 120% and 947%, respectively.

This paper focuses on complexity reduction in video coding, with the aim to derive a set of simplified and cost-effective attention-based architectures for chroma intra-prediction. Understanding and distilling knowledge from the networks enables the implementation of less complex algorithms which achieve similar performance to the original models. Moreover, a novel training methodology is proposed in order to design a block-independent multi-model which outperforms the state-of-the-art attention-based architectures and reduces inference complexity. The use of variable block sizes during training helps the model to better generalise on content variety, while ensuring higher precision on predicting large chroma blocks. The main contributions of this work are the following:
- A competitive block-independent attention-based multi-model and training methodology;
- A framework for complexity reduction of the convolutional operations;
- A simplified cross-component processing model using sparse auto-encoders;
- A fast and cost-effective attention-based multi-model with integer-precision approximations.

This paper is organised as follows: Section II provides a brief overview of the related work, Section III introduces the attention-based methodology in detail and establishes the mathematical notation for the rest of the paper, Section IV presents the proposed simplifications, and Section V shows experimental results, with conclusions drawn in Section VI.

II. BACKGROUND

Colour images are typically represented by three colour components (e.g. RGB, YCbCr).
The YCbCr colour scheme is often adopted by digital image and video coding standards (such as JPEG, MPEG-1/2/4 and H.261/3/4) due to its ability to compact the signal energy and to reduce the total required bandwidth. Moreover, chrominance components are often subsampled by a factor of two to conform to the YCbCr 4:2:0 chroma format, in which the luminance signal contains most of the spatial information. Nevertheless, cross-component redundancies can be further exploited by reusing information from already coded components to compress another component. In the case of YCbCr, the Cross-Component Linear Model (CCLM) [8] uses a linear model to predict the chroma signal from a subsampled version of the already reconstructed luma block signal. The model parameters are derived at both the encoder and decoder sides without needing explicit signalling in the bitstream.

Another example is Cross-Component Prediction (CCP) [9], which resides at the transform unit (TU) level regardless of the input colour space. In the case of YCbCr, a subsampled and dequantised luma transform block (TB) is used to modify the chroma TB at the same spatial location, based on a context parameter signalled in the bitstream. An extension of this concept modifies one chroma component using the residual signal of the other one [10]. Such methodologies significantly improved the coding efficiency by further exploiting the cross-component correlations within the chroma components.

In parallel, the recent success of deep learning applications in computer vision and image processing has influenced the design of novel video compression algorithms. In particular, in the context of intra-prediction, a new algorithm [3] was introduced based on fully-connected layers and CNNs to map the prediction of block positions from the already reconstructed neighbouring samples, achieving BD-rate (Bjontegaard Delta rate) [11] savings of up to 3.0% on average over HEVC, for an approximately 200% increase in decoding time. The successful integration of CNN-based methods for luma intra-prediction into existing codec architectures has motivated research into alternative methods for chroma prediction, exploiting cross-component redundancies similarly to the aforementioned LM methods. A novel hybrid neural network for chroma intra prediction was recently introduced in [2]. A first CNN was designed to extract features from reconstructed luma samples. This was combined with another fully-connected network used to extract cross-component correlations between neighbouring luma and chroma samples. The resulting architecture uses complex non-linear mapping for end-to-end prediction of chroma channels. However, this is achieved at the cost of disregarding the spatial location of the boundary reference samples and a significant increase in the complexity of the prediction process. As shown in [4], after a consecutive application of the fully-connected layers in [2], the location of each input boundary reference sample is lost. Therefore, the fully-convolutional architecture in [4] better matches the design of the directional VVC modes and is able to provide significantly better performance.

Fig. 2. Baseline attention-based architecture for chroma intra prediction presented in [4] and described in Section III.
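Since coding efficiency throughout this paper is summarised with the BD-rate metric cited above, a compact sketch of how such numbers are obtained may help (our illustration of the standard Bjontegaard procedure [11], not code from the paper; reference implementations differ in interpolation details):

```python
# Illustrative BD-rate computation: average log-rate difference between two
# rate-distortion curves over their common quality (PSNR) interval.
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Return BD-rate in percent (negative = bitrate saving for the test codec)."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit log-rate as a cubic polynomial of PSNR, per curve.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # Integrate both fits over the overlapping PSNR range.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Four rate points per codec, as in common test conditions:
print(bd_rate([100, 200, 400, 800], [32, 35, 38, 41],
              [95, 185, 370, 760], [32, 35, 38, 41]))   # ~ -7% (rate saving)
```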
The use of attention models enables effective utilisation of the individual spatial location of the reference samples [4]. The concept of "attention-based" learning is a well-known idea used in deep learning frameworks to improve the performance of trained networks in complex prediction tasks [12], [13], [14]. In particular, self-attention is used to assess the impact of particular input variables on the outputs, whereby the prediction is computed focusing on the most relevant elements of the same sequence [15]. The novel attention-based architecture introduced in [4] reports average BD-rate reductions of -0.22%, -1.84% and -1.78% for the Y, Cb and Cr components, respectively, although it significantly impacts the encoder and decoder time.

One common aspect across all related work is that whilst the result is an improvement in compression, this comes at the expense of increased complexity of the encoder and decoder. In order to address the complexity challenge, this paper aims to design a set of simplified attention-based architectures for performing chroma intra-prediction faster and more efficiently. Recent works addressed complexity reduction in neural networks using methods such as channel pruning [16], [17], [18] and quantisation [19], [20], [21]. In particular, for video compression, many works used integer arithmetic in order to efficiently implement trained neural networks on different hardware platforms. For example, the work in [22] proposes a training methodology to handle low-precision multiplications, proving that very low precision is sufficient not just for running trained networks but also for training them. Similarly, the work in [23] considers the problem of using variational latent-variable models for data compression and proposes integer networks as a universal solution when range coding is used as the entropy coding technique. The authors demonstrate that such models enable reliable cross-platform encoding and decoding of images using variational models. Moreover, in order to ensure deterministic implementations on hardware platforms, they approximate non-linearities using lookup tables. Finally, an efficient implementation of matrix-based intra prediction is proposed in [24], where a performance analysis evaluates the challenges of deploying models with integer arithmetic in video coding standards. Inspired by this knowledge, this paper develops a fast and cost-effective implementation of the proposed attention-based architecture using integer-precision approximations. As shown in Section V-D, while such approximations can significantly reduce the complexity, the associated drop in performance is still not negligible.
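As a concrete illustration of the lookup-table idea mentioned above (a minimal sketch under assumed fixed-point parameters; this is not the scheme of [23] or of this paper, and all bit widths are illustrative), a non-linearity such as the sigmoid can be tabulated once offline and then evaluated with integer indexing only:

```python
# Minimal sketch: approximating a non-linearity with a lookup table so the
# forward pass needs only integer arithmetic.  Bit widths are illustrative.
import numpy as np

IN_BITS = 8                      # quantised input index width
X_MIN, X_MAX = -6.0, 6.0         # sigmoid is nearly saturated outside this range
SCALE = 255                      # 8-bit output scale

# Build the table once, offline (floating point is allowed at this stage).
grid = np.linspace(X_MIN, X_MAX, 2 ** IN_BITS)
SIGMOID_LUT = np.round(SCALE / (1.0 + np.exp(-grid))).astype(np.uint8)

def sigmoid_int(x_q: np.ndarray) -> np.ndarray:
    """Evaluate the sigmoid for quantised inputs via table lookup only.

    x_q holds indices in [0, 2**IN_BITS - 1], as a fixed-point pipeline
    would produce; the result is an 8-bit integer in [0, SCALE].
    """
    return SIGMOID_LUT[np.clip(x_q, 0, 2 ** IN_BITS - 1)]

# Deterministic on any platform: same table, same integer results.
print(sigmoid_int(np.array([0, 128, 255])))   # -> [  1 129 254]
```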
III. ATTENTION-BASED ARCHITECTURES

This section describes in detail the attention-based approach proposed in [4] (Figure 2), which will be the baseline for the methodology presented in this paper. The section also provides the mathematical notation used for the rest of this paper.

Fig. 3. Proposed multi-model attention-based architectures with the integration of the simplifications introduced in this paper. More details about the model's hyperparameters and a description of the referred schemes can be found in Section V.

Without loss of generality, only square blocks of pixels are considered in this work. After intra-prediction and reconstruction of a luma block in the video compression chain, luma samples can be used for prediction of co-located chroma components. In this discussion, the size of a luma block is assumed to be (downsampled to) N×N samples, which is the size of the co-located chroma block. This may require the usage of conventional downsampling operations, such as in the case of using chroma sub-sampled picture formats such as 4:2:0. Note that a video coding standard treats all image samples as unsigned integer values within a certain precision range based on the internal bit depth. However, in order to utilise common deep learning frameworks, all samples are converted to floating point and normalised to values within the range [0, 1]. For the chroma prediction process, the reference samples used include the co-located luma block X_0 ∈ R^{N×N}, and the array of reference samples B_c ∈ R^b, with b = 4N + 1, from the left and from above the current block (Figure 1), where c = Y, Cb or Cr refers to the three colour components. B is constructed from samples on the left boundary (starting from the bottom-most sample), then the corner is added, and finally the samples on top are added (starting from the left-most sample). In case some reference samples are not available, these are padded using a predefined value, following the standard approach defined in VVC. Finally, S_0 ∈ R^{3×b} is the cross-component volume obtained by concatenating the three reference arrays B_Y, B_Cb and B_Cr.
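The reference-array layout described above can be made concrete with a short sketch (illustrative only; the padding value and all names are ours, assuming NumPy arrays of decoded samples):

```python
# Sketch of the reference-array layout described above: left boundary from
# the bottom-most sample, then the corner, then the top row left-to-right.
import numpy as np

def boundary_array(left, corner, top, b, pad_value=0.5):
    """Assemble B_c in R^b (b = 4N + 1) for one colour component.

    left:   2N samples of the left boundary, in top-to-bottom order
    corner: the single top-left corner sample (None if unavailable)
    top:    2N samples above the block, in left-to-right order
    Missing samples would be padded with a predefined value, as in VVC
    (0.5 here, since samples are normalised to [0, 1]).
    """
    corner = pad_value if corner is None else corner
    arr = np.concatenate([left[::-1], [corner], top])  # bottom-most sample first
    assert arr.size == b
    return arr.astype(np.float32)

N = 4
b = 4 * N + 1
refs = {c: boundary_array(np.random.rand(2 * N), 0.4,
                          np.random.rand(2 * N), b) for c in ("Y", "Cb", "Cr")}
S0 = np.stack([refs["Y"], refs["Cb"], refs["Cr"]])     # S_0 in R^{3 x b}
```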
Similar to the model in [2], the attention-based architecture adopts a scheme based on three network branches that are combined to produce prediction samples, as illustrated in Figure 2. The first two branches work concurrently to extract features from the input reference samples.

The first branch (referred to as the cross-component boundary branch) extracts cross-component features from S_0 ∈ R^{3×b} by applying I consecutive D_i-dimensional 1×1 convolutional layers to obtain the S_i ∈ R^{D_i×b} output feature maps, where i = 1, 2, ..., I. By applying 1×1 convolutions, the boundary input dimensions are preserved, resulting in a D_i-dimensional vector of cross-component information for each boundary location. The resulting volumes are activated using a Rectified Linear Unit (ReLU) non-linear function.

In parallel, the second branch (referred to as the luma convolutional branch) extracts spatial patterns over the co-located reconstructed luma block X_0 by applying convolutional operations. The luma convolutional branch is defined by J consecutive C_j-dimensional 3×3 convolutional layers with a stride of 1, to obtain the X_j ∈ R^{C_j×N²} feature maps from the N² input samples, where j = 1, 2, ..., J. Similarly to the cross-component boundary branch, a bias and a ReLU activation are applied within each convolutional layer.

The feature maps (S_I and X_J) from both branches are each convolved using a 1×1 kernel, to project them into two corresponding reduced feature spaces. Specifically, S_I is convolved with a filter W_F ∈ R^{h×D} to obtain the h-dimensional feature matrix F. Similarly, X_J is convolved with a filter W_G ∈ R^{h×C} to obtain the h-dimensional feature matrix G. The two matrices are multiplied together to obtain the pre-attention map M = G⊤F. Finally, the attention matrix A ∈ R^{N²×b} is obtained by applying a softmax operation to each element of M, to generate the probability of each boundary location being able to predict a sample location in the block. Each value α_{j,i} in A is obtained as:

\[
\alpha_{j,i} = \frac{\exp(m_{i,j}/T)}{\sum_{n=0}^{b-1}\exp(m_{n,j}/T)}, \qquad (1)
\]

where j = 0, ..., N²−1 represents the sample location in the predicted block, i = 0, ..., b−1 represents a reference sample location, and T is the softmax temperature parameter controlling the smoothness of the generated probabilities, with 0 < T.
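A compact sketch of the attention computation in equation (1) follows (ours, with illustrative shapes and names; in the model of [4] the projections W_F and W_G are learned as 1×1 convolutions, which for these shapes reduce to the matrix products shown):

```python
# Sketch of the attention map of equation (1) with NumPy (shapes illustrative).
import numpy as np

def attention_map(S_I, X_J, W_F, W_G, T=1.0):
    """Compute A in R^{N^2 x b} from boundary and luma feature maps.

    S_I: (D, b) boundary features;  X_J: (C, N*N) luma features
    W_F: (h, D) and W_G: (h, C) act as 1x1 convolutions (matrix products).
    """
    F = W_F @ S_I                      # (h, b)
    G = W_G @ X_J                      # (h, N*N)
    M = G.T @ F                        # (N*N, b) pre-attention map
    E = np.exp(M / T)
    return E / E.sum(axis=1, keepdims=True)   # softmax over the b boundary locations

# Toy shapes: N=4 block, b=17 boundary samples, D=C=32 features, h=16.
rng = np.random.default_rng(1)
A = attention_map(rng.standard_normal((32, 17)), rng.standard_normal((32, 16)),
                  rng.standard_normal((16, 32)), rng.standard_normal((16, 32)))
print(A.shape, A.sum(axis=1)[:3])      # (16, 17); each row sums to 1
```

Lowering the temperature T sharpens each row of A towards the single most relevant boundary sample, mimicking a directional mode; raising it spreads the prediction over many reference locations.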