No gas price hike in present situation: BERC

Bangladesh Energy Regulatory Commission (BERC) on Tuesday said it scrapped the decision to raise gas prices 'considering the overall situation' in the country.

The watchdog body announced its decision at a press conference at the commission office in the capital city, reports UNB.

"In the present situation we find no reason to raise the gas price," said BERC chairman Monwar Islam while briefing newsmen. He, however, did not directly respond to the question of whether the commission refrained from raising gas prices this time on the advice of prime minister Sheikh Hasina ahead of the general elections.

Monwar said the distribution companies had appealed for a price hike based on the import of 1000 mmcfd of LNG, but currently LNG could supply only 300 mmcfd. Secondly, he said, the NBR has granted a supplementary duty waiver on the import of LNG. "That's why we don't allow raising gas price," he said.

Other commissioners were also present at the briefing. Commission member Mohammad Abdul Aziz, however, said that the government would have to provide Tk 3000 as subsidy to the distribution companies to cover the loss they will incur due to the import of LNG. He said the commission would ask for Tk 1400 to be provided from the short support fund, and that the commission would consider a price hike when 1000 mmcfd of LNG is supplied to the national network.

Earlier, all eight state-owned downstream entities in the gas sector (six distribution companies, one transmission company and an LNG marketing company) had appealed to the BERC seeking an average 75 per cent hike on existing gas prices for different consumer groups, except the household and commercial ones. The upward price revision was sought for industrial consumers, power plants, fertiliser factories, captive power plants, and CNG refuelling stations.

The distribution companies are Titas Gas Transmission & Distribution Company Limited (Titas Gas T&D), Bakhrabad Gas Distribution Company Limited (BGDCL), Jalalabad Gas Transmission and Distribution System Limited, Pashchimanchal Gas Company Limited, Karnaphuli Gas Distribution Company Limited and Sundarbans Gas Company Limited (SGCL).

While participating in the hearing, the gas entities argued that, as per the government decision, they had to submit their respective price hike proposals because the high import cost of LNG would push up their costs substantially.

Petrobangla started supplying the imported LNG to the national gas network from 18 August through re-gasification by a private sector-operated floating storage and re-gasification unit (FSRU). Officials said currently 300 mmcfd of gas is being supplied from LNG; it will go up to 500 mmcfd in a month or two, and then 1000 mmcfd will flow from next year as per a government plan.

Remembering the Day King Was Killed

By Tilesha Brown, Special to the AFRO

April 4, 1968. It's a date that will forever be remembered as the day Rev. Dr. Martin Luther King, Jr. was shot on the balcony of the Black-run Lorraine Motel in Memphis, Tennessee.

On that day, he was on his way to Rev. Samuel Billy Kyles' house for a soul food dinner when he was shot. Kyles was standing there as the gun went off and the bullet found Dr. King.

One day before his death, King had spoken to a large crowd at the Mason Temple Church of God In Christ just a mile away. Bishop J. Louis Felton, the current pastor of Mt. Airy Church of God in Christ in Philadelphia, was there as part of the garbage workers' strike that was going on in Memphis.

Jacqueline Caldwell (left) and Bishop J. Louis Felton (right) recall their experiences in the aftermath of the assassination of Dr. Martin Luther King Jr. (Courtesy photos)

According to Felton, it was a cold and stormy day, and everyone could tell that Dr. King was tired and emotionally drained. It was well known by the time he approached the podium that he hadn't even planned to be there. And when King declared in that speech that he had seen the promised land, everyone in the place felt something.

"You got the impression that he knew something was about to happen. You felt it," Felton told the AFRO. "When you get this premonition that it's showdown time, your whole mood changes."

In retrospect, Felton said that God had actually added another week to King's life, because his speech and that protest were originally scheduled for the last week in March. However, that year Memphis was hit with an unusually heavy snowstorm that pushed everything to the first week in April.

"God wanted King to live a few more days," Felton said. "He lived right in the presence of death every day, but this night it was stronger than it had ever been."

He learned that detail from Kyles, whose family had cooked for King that night.
King gave the distinct impression that he would not make it to see 40.

"I may not get there with you," King told the Kyles family. "But I want you to know tonight that we, as a people, will get to the Promised Land."

When he walked out of that motel room at approximately 5:45 p.m., just moments passed before the shot rang out. And very shortly after his death, the rioting began.

"It was like being in a state of war," Felton recalls, "because we believed it was an act of war. I believe it was an act of terrorism."

It was a scene of disarray, he said; hearts were sunken everywhere. They had lost their leader, and according to Felton, they knew no one would ever fill that void. "King had something that no one else had," he said.

As the news hit the national press, unrest immediately broke out across the nation. And it didn't take long for it to explode in Baltimore.

Jacqueline Caldwell, president of the Greater Mondawmin Coordinating Council, still lives in the same neighborhood that she lived in when the riots began on Monroe Street.

"I was in second grade and I just remember thinking, 'Why is this happening in my neighborhood?'"

She remembers that it was on TV in black and white and that people were crying profusely in her house. She wasn't allowed to go outside, but she could see the violence clearly out of her window as it made its way through her neighborhood.

Authorities had put a curfew into effect and there was rioting in the streets. The nation was in an uproar. According to both Caldwell and Felton, it took years for these cities to regroup.

And that's why Caldwell says it was difficult to see that kind of violence erupt again right in her face in 2015, after the death of Freddie Gray.

"I was shielded from it when I was a kid," she says. "I only saw it from my window, but this time it was really upsetting to see it up close and in real time."

Bishop Felton agrees.
But they are also both encouraged by people like King's granddaughter, who spoke at last weekend's March for Our Lives rally.

"Her speech resonates with what King was about: nonviolent, proactive, and effective change," Felton says. "It means that the legacy of King not only survived but it thrives. It means that fifty years later, the dream is still alive."

Bryson Starts Youth Sports Coalition to Improve PG County Athletics

By Mark F. Gray, AFRO Staff Writer, mgray@afro.com

Dr. Rick Bryson has long been an advocate for youth sports and participatory activities in Prince George's County. Bryson has put his money where his mouth is by creating the County's Scholastic Sports Awards Banquet, and he has now begun a new initiative designed to improve the quality of youth and participatory sports.

Dr. Bryson's latest cause is the creation of the Youth Scholastic Coalition, designed to support and advocate for the activities kids participate in while collaborating with the scholastic support network for young athletes throughout Prince George's County. Building bridges between youth and scholastic sports athletes and an academic support network has been a passion for the longtime podiatrist since his days as a baseball dad, when his son was growing up and competing in Accokeek.

The PG Youth Sports Coalition hopes to improve the quality of baseball fields throughout the county. (Courtesy Photo)

During that time Dr. Bryson built a relationship with St. Paul's United Methodist Church in Fort Washington. For three years, youth baseball players from the District and Prince George's County participated in an academic development program that served as the genesis for the establishment of the coalition. They competed on the field while "barnstorming" throughout the DMV for games during the summer.

However, the players also participated in a Scholastic Aptitude Testing program at the church for 12 weeks over three years. The synergy between athletics and academics for that group of players helped prepare them for the road to college. Several players from the initial group would head to schools such as the U.S. Air Force Academy and Morehouse College to pursue their college degrees, not just as athletes.

"Sports is one thing, but academics will carry you, which is what we're trying to stress," Bryson told the AFRO. "It was a lot easier to get the kids to participate in the learning phase when their peers were a part of the program."

There are many organizations in the community that provide academic and athletic options for youth as individuals, and the Coalition is working to connect students with groups that will be able to provide additional information and services. For example, organizations with athletic-based programs can expose their participants to services offered by an academic-based program. Those programs can include everything from college preparatory test tutoring to advising families about the NCAA eligibility clearinghouse requirements for potential college student-athletes.

"Most groups are focused on the specific activities of their respective organizations, but the Coalition's focus is on addressing issues that concern everyone," Bryson said. "The basic premise of establishing the Coalition is the belief that, collectively, organizations can be more effective in addressing concerns and issues that have an impact on the community if there is a positive atmosphere of collaboration."

The Coalition also serves as an advocate for the various needs related to youth sports programs in P.G. County. It assists by lobbying to improve facilities and interacts with various state and county officials to engage them in addressing issues of concern for high school sports participants as well. In addition to providing academic support and access to potential scholarship opportunities, one of its goals is to make access to quality fields a priority for all sports participants.

The greatest challenge facing the Coalition is developing corporate partnerships to help sponsor the programs Dr. Bryson and his group are working to establish. Grantors typically look favorably on civic organizations that partner with other groups to address community needs and services. The Coalition helps facilitate these partnerships.

"We are aware of the various needs related to youth programs," Bryson said. "We are working to identify grant-funding sources from various government and non-profit organizations."

Generative Adversarial Networks: generating images with a Keras GAN [Tutorial]

You might have worked with the popular MNIST dataset before, but in this article we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found, and optimized, by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations called Keras GAN served as the basis of the code we used here.

Loading the MNIST dataset

The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

import numpy as np
from keras.datasets import mnist

def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)
    return X_train

As you probably noticed, we're not returning any of the labels or the testing dataset. We're only going to use the training dataset. The labels aren't needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section.
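One detail of load_data() worth pausing on before we build the generator: the pixel scaling maps MNIST's 0 to 255 grayscale values into [-1, 1], which matches the tanh activation at the generator's output. Here is a tiny NumPy-only sketch of that transform and its inverse (the sample pixel values are arbitrary, chosen to hit the endpoints and midpoint):

```python
import numpy as np

# Arbitrary pixel values spanning the grayscale range 0..255
pixels = np.array([0, 127.5, 255], dtype=np.float32)

# The forward transform used in load_data(): [0, 255] -> [-1, 1]
scaled = (pixels - 127.5) / 127.5      # -> [-1.0, 0.0, 1.0]

# The inverse, used later when saving generated images: [-1, 1] -> [0, 1]
recovered = 0.5 * scaled + 0.5         # -> [0.0, 0.5, 1.0]
```

Keeping the data in the same range as the generator's output activation is what lets the discriminator see real and fake images on equal footing.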
First, take a moment to skim through the following code:

def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator -- ")
    model.summary()
    return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here we will do the opposite. We'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we'll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours you will get the hang of it!

Building the discriminator

The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about.
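Before moving on, the UpSampling2D arithmetic described above is easy to mimic in plain NumPy. This is an illustrative sketch of the default nearest-neighbour doubling (the helper name upsample2d is our own, not the Keras implementation):

```python
import numpy as np

def upsample2d(t, size=2):
    """Repeat rows, then columns, leaving channels untouched --
    the same nearest-neighbour doubling UpSampling2D does by default.
    t has shape (height, width, channels)."""
    t = np.repeat(t, size, axis=0)  # duplicate each row
    t = np.repeat(t, size, axis=1)  # duplicate each column
    return t

# The shape arithmetic from the text: 7 x 7 x 128 -> 14 x 14 x 128
x = np.zeros((7, 7, 128))
assert upsample2d(x).shape == (14, 14, 128)

# Each value is simply repeated in a 2 x 2 block:
small = np.array([[[1], [2]],
                  [[3], [4]]])          # shape (2, 2, 1)
up = upsample2d(small)                  # shape (4, 4, 1)
# up[:, :, 0] is:
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```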
We will use the following code to build the discriminator:

def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator -- ")
    model.summary()
    return model

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64.

More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we've assembled both the generator and the discriminator, we need to assemble a third model that is the stack of both models together, which we can use for training the generator given the discriminator loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

discriminator = build_discriminator(img_shape=(28, 28, 1))
generator = build_generator()

z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)

Notice that we're setting the discriminator's trainable attribute to False before building the model.
This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

You'll notice that we're creating two custom Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You'll also notice that we're using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we've carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was found in the original DCGAN paper.

The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini batches and training for us. Unfortunately, because we need to perform separate updates for the discriminator and the stacked model for a single batch, we're going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it's perhaps a little more work, it does admittedly leave me feeling nostalgic.
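Before walking through the full loop, it may help to see the index bookkeeping it relies on in isolation. The sketch below is not from the book (the seed and sizes are arbitrary); it contrasts the with-replacement sampling the loop uses against a shuffle-and-slice alternative:

```python
import numpy as np

rng = np.random.default_rng(0)
num_examples, half_batch = 60000, 16   # arbitrary illustrative sizes

# The loop samples indices WITH replacement on every step, exactly like
# np.random.randint(0, X_train.shape[0], half_batch): the same image can
# appear in two consecutive batches, and some images may be skipped.
idx = rng.integers(0, num_examples, half_batch)

# A common alternative: shuffle once per epoch and slice, so every example
# is visited exactly once per epoch (sampling WITHOUT replacement).
perm = rng.permutation(num_examples)
epoch_first_batch = perm[:half_batch]   # indices guaranteed distinct

assert idx.shape == (half_batch,)
assert len(set(epoch_first_batch.tolist())) == half_batch
```

For GAN training the with-replacement form works fine in practice, which is why the simpler version is used here.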
The following code illustrates the training technique:

num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))
        # real images for batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))
        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        noise = np.random.normal(0, 1, (batch_size, 100))
        # Train the generator
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)

There is a lot going on here, to be sure. As before, let's break it down block by block. First, let's see the code to generate noise vectors:

noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))

This code is generating a matrix of noise vectors (called z) and sending it to the generator. It's getting a set of generated images back, which we're calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images.

Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is. We're creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next.
To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement minibatch training. It is, however, probably the easiest and most common.

Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels. That's because we want to keep things as easy as they can be to understand. Luckily the network doesn't require them in this case. We will use the following code to train the discriminator:

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state.

Also, notice that we're making the update for the real images and fake images separately. This is advice that is given in the GAN hacks repository we previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images come from radically different distributions, batch normalization will cause problems with training if we were to put both sets of data in the same update.

Now that the discriminator has been updated, it's time to update the generator.
This is done indirectly by updating the combined stack, as shown in the following code:

noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss at the epoch/batch and then, every 50 batches of every epoch, we will use save_imgs to generate example images and save them to disk, as shown in the following code:

print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
if batch % 50 == 0:
    save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()

It uses only the generator by creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.

Performing model evaluation

Good is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.
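One aside on save_imgs before we look at results: the 5 x 5 grid it draws with matplotlib can also be assembled directly as a single NumPy array, which is handy if you want to log images without a plotting backend. This is our own sketch, not from the book; the random array stands in for generator.predict(noise):

```python
import numpy as np

r, c, h, w = 5, 5, 28, 28
# Stand-in for generator.predict(noise): 25 tanh-scaled fake images
gen_imgs = np.random.uniform(-1, 1, (r * c, h, w, 1))
gen_imgs = 0.5 * gen_imgs + 0.5        # rescale to [0, 1], as save_imgs does

# Tile (25, 28, 28) into one (140, 140) image grid
grid = (gen_imgs[:, :, :, 0]
        .reshape(r, c, h, w)           # (row, col, height, width)
        .swapaxes(1, 2)                # (row, height, col, width)
        .reshape(r * h, c * w))        # rows of tiles become rows of pixels

assert grid.shape == (140, 140)
```

The resulting array can be written out with any image library, or passed to a single imshow call instead of 25 subplots.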
Here's the network at the very first batch of the very first epoch. Clearly, the generator doesn't really know anything about generating MNIST at this point; its output is just noise. But just 50 batches in, something is happening, and after 200 batches of epoch 0 we can almost see numbers.

And here's our generator after one full epoch. These generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked, as the computer is generating some pretty convincing MNIST digits.

Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference for more such interesting coverage of popular deep learning concepts and their practical implementation.

Read Next

Keras 2.2.0 releases!
2 ways to customize your deep learning models with Keras
How to build Deep convolutional GAN using TensorFlow and Keras

VoX appointed Canadian PR rep for St. Vincent and the Grenadines

Travelweek Group

Monday, March 26, 2018

TORONTO — Watch for St. Vincent and the Grenadines to make some moves in Canada, as the destination has selected VoX International as its Canadian public relations and communications representative.

As such, VoX will manage media relations and day-to-day operations for the Canadian market. It will also host a number of media fam trips to the islands.

"By selecting VoX International as our public relations agency in Canada, we are showing our commitment to the Canadian travel market by working with an industry-leading company," said Glen Beache, CEO, SVGTA. "We cannot wait to welcome Canadian travellers to our destination in 2018."

St. Vincent and The Grenadines is poised to welcome more Canadians this year with direct air service, the opening of its new Argyle International Airport, and a full calendar of festivals.

A collection of 32 islands and cays in the Caribbean, St. Vincent and The Grenadines stretches 45 miles south from the main island of St. Vincent and includes eight inhabited islands. Surrounded by water, it's known to have some of the best sailing waters in the world; the recently opened US$250 million Glossy Bay Marina on Canouan has set the bar for future marine development in the region.

On land, St. Vincent and The Grenadines offers hiking along the mountainous Vermont Nature Trail, as well as a wide assortment of world-class restaurants on all nine inhabited islands. Blessed with fertile volcanic soil, the islands are known for locally grown fruits, vegetables and spices, all of which are used to create such specialties as seafood callaloo soup, roasted breadfruit, and fried jack fish.

Agents can learn more about the destination by visiting travelweeklearningcentre.com/st-vincent-and-the-grenadines.